Apr 17 23:26:51.901457 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 17 23:26:51.901490 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 17 22:13:49 -00 2026
Apr 17 23:26:51.901503 kernel: KASLR enabled
Apr 17 23:26:51.901509 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 17 23:26:51.901515 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 17 23:26:51.901521 kernel: random: crng init done
Apr 17 23:26:51.901529 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:26:51.901535 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 17 23:26:51.901542 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:26:51.901549 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901556 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901562 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901568 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901575 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901582 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901590 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901597 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901604 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:26:51.901611 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:26:51.901617 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 17 23:26:51.901624 kernel: NUMA: Failed to initialise from firmware
Apr 17 23:26:51.901630 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 17 23:26:51.901637 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Apr 17 23:26:51.901643 kernel: Zone ranges:
Apr 17 23:26:51.901650 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 17 23:26:51.901658 kernel: DMA32 empty
Apr 17 23:26:51.901665 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 17 23:26:51.901672 kernel: Movable zone start for each node
Apr 17 23:26:51.901678 kernel: Early memory node ranges
Apr 17 23:26:51.901685 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 17 23:26:51.901691 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 17 23:26:51.901698 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 17 23:26:51.901704 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 17 23:26:51.901711 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 17 23:26:51.901718 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 17 23:26:51.901724 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 17 23:26:51.901731 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 17 23:26:51.901739 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 17 23:26:51.901746 kernel: psci: probing for conduit method from ACPI.
Apr 17 23:26:51.901753 kernel: psci: PSCIv1.1 detected in firmware.
Apr 17 23:26:51.901763 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 23:26:51.901770 kernel: psci: Trusted OS migration not required
Apr 17 23:26:51.901777 kernel: psci: SMC Calling Convention v1.1
Apr 17 23:26:51.901785 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 17 23:26:51.901793 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 17 23:26:51.901800 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 17 23:26:51.901807 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 23:26:51.901814 kernel: Detected PIPT I-cache on CPU0
Apr 17 23:26:51.901822 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 23:26:51.901829 kernel: CPU features: detected: Hardware dirty bit management
Apr 17 23:26:51.901836 kernel: CPU features: detected: Spectre-v4
Apr 17 23:26:51.901843 kernel: CPU features: detected: Spectre-BHB
Apr 17 23:26:51.901850 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 17 23:26:51.901858 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 17 23:26:51.901865 kernel: CPU features: detected: ARM erratum 1418040
Apr 17 23:26:51.901872 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 17 23:26:51.901879 kernel: alternatives: applying boot alternatives
Apr 17 23:26:51.901887 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:26:51.901894 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:26:51.901901 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:26:51.901908 kernel: Fallback order for Node 0: 0
Apr 17 23:26:51.901915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 17 23:26:51.901922 kernel: Policy zone: Normal
Apr 17 23:26:51.901931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:26:51.901940 kernel: software IO TLB: area num 2.
Apr 17 23:26:51.901947 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 17 23:26:51.901955 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Apr 17 23:26:51.901962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:26:51.901969 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:26:51.901977 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:26:51.901984 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:26:51.901991 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:26:51.902026 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:26:51.902034 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:26:51.902042 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:26:51.902049 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 23:26:51.903120 kernel: GICv3: 256 SPIs implemented
Apr 17 23:26:51.903139 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 23:26:51.903146 kernel: Root IRQ handler: gic_handle_irq
Apr 17 23:26:51.903154 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 17 23:26:51.903161 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 17 23:26:51.903168 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 17 23:26:51.903176 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 17 23:26:51.903183 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 17 23:26:51.903190 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 17 23:26:51.903198 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 17 23:26:51.903205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:26:51.903219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 17 23:26:51.903226 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 17 23:26:51.903240 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 17 23:26:51.903248 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 17 23:26:51.903255 kernel: Console: colour dummy device 80x25
Apr 17 23:26:51.903263 kernel: ACPI: Core revision 20230628
Apr 17 23:26:51.903270 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 17 23:26:51.903278 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:26:51.903285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:26:51.903293 kernel: landlock: Up and running.
Apr 17 23:26:51.903302 kernel: SELinux: Initializing.
Apr 17 23:26:51.903310 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:26:51.903317 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:26:51.903324 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:26:51.903332 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:26:51.903339 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:26:51.903347 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:26:51.903354 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 17 23:26:51.903361 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 17 23:26:51.903370 kernel: Remapping and enabling EFI services.
Apr 17 23:26:51.903377 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:26:51.903384 kernel: Detected PIPT I-cache on CPU1
Apr 17 23:26:51.903392 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 17 23:26:51.903399 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 17 23:26:51.903406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 17 23:26:51.903413 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 17 23:26:51.903421 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:26:51.903428 kernel: SMP: Total of 2 processors activated.
Apr 17 23:26:51.903435 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 23:26:51.903444 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 17 23:26:51.903452 kernel: CPU features: detected: Common not Private translations
Apr 17 23:26:51.903464 kernel: CPU features: detected: CRC32 instructions
Apr 17 23:26:51.903473 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 17 23:26:51.903481 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 17 23:26:51.903488 kernel: CPU features: detected: LSE atomic instructions
Apr 17 23:26:51.903496 kernel: CPU features: detected: Privileged Access Never
Apr 17 23:26:51.903504 kernel: CPU features: detected: RAS Extension Support
Apr 17 23:26:51.903513 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 17 23:26:51.903521 kernel: CPU: All CPU(s) started at EL1
Apr 17 23:26:51.903528 kernel: alternatives: applying system-wide alternatives
Apr 17 23:26:51.903536 kernel: devtmpfs: initialized
Apr 17 23:26:51.903544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:26:51.903551 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:26:51.903559 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:26:51.903566 kernel: SMBIOS 3.0.0 present.
Apr 17 23:26:51.903576 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 17 23:26:51.903583 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:26:51.903591 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 23:26:51.903599 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 23:26:51.903606 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 23:26:51.903614 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:26:51.903622 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Apr 17 23:26:51.903630 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:26:51.903637 kernel: cpuidle: using governor menu
Apr 17 23:26:51.903647 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 23:26:51.903654 kernel: ASID allocator initialised with 32768 entries
Apr 17 23:26:51.903662 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:26:51.903670 kernel: Serial: AMBA PL011 UART driver
Apr 17 23:26:51.903677 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 17 23:26:51.903685 kernel: Modules: 0 pages in range for non-PLT usage
Apr 17 23:26:51.903693 kernel: Modules: 509008 pages in range for PLT usage
Apr 17 23:26:51.903700 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:26:51.903708 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:26:51.903717 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 23:26:51.903724 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 23:26:51.903732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:26:51.903740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:26:51.903747 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 23:26:51.903755 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 23:26:51.903762 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:26:51.903770 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:26:51.903777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:26:51.903786 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:26:51.903794 kernel: ACPI: Interpreter enabled
Apr 17 23:26:51.903802 kernel: ACPI: Using GIC for interrupt routing
Apr 17 23:26:51.903809 kernel: ACPI: MCFG table detected, 1 entries
Apr 17 23:26:51.903817 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 17 23:26:51.903824 kernel: printk: console [ttyAMA0] enabled
Apr 17 23:26:51.903832 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:26:51.904011 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:26:51.905289 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:26:51.905388 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:26:51.905456 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 17 23:26:51.905524 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 17 23:26:51.905534 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 17 23:26:51.905543 kernel: PCI host bridge to bus 0000:00
Apr 17 23:26:51.905620 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 17 23:26:51.905682 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 17 23:26:51.905748 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 17 23:26:51.905808 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:26:51.905895 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 17 23:26:51.905974 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 17 23:26:51.907159 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 17 23:26:51.907283 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 17 23:26:51.907374 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.907445 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 17 23:26:51.907523 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.907593 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 17 23:26:51.907672 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.907741 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 17 23:26:51.907826 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.907894 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 17 23:26:51.907969 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.909202 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 17 23:26:51.909315 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.909386 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 17 23:26:51.909473 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.909543 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 17 23:26:51.909622 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.909691 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 17 23:26:51.909768 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:26:51.909839 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 17 23:26:51.909943 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 17 23:26:51.910051 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 17 23:26:51.911270 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 17 23:26:51.911346 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 17 23:26:51.911418 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 17 23:26:51.911488 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 17 23:26:51.911577 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 17 23:26:51.911660 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 17 23:26:51.911740 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 17 23:26:51.911814 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 17 23:26:51.913153 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 17 23:26:51.913305 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 17 23:26:51.913377 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 17 23:26:51.913457 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 17 23:26:51.913535 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 17 23:26:51.913605 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 17 23:26:51.913683 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 17 23:26:51.913754 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 17 23:26:51.913823 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 17 23:26:51.913909 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 17 23:26:51.913980 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 17 23:26:51.914813 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 17 23:26:51.914909 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 17 23:26:51.914986 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 17 23:26:51.915097 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 17 23:26:51.915172 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 17 23:26:51.915255 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 17 23:26:51.915325 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 17 23:26:51.915393 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 17 23:26:51.915468 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 17 23:26:51.915536 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 17 23:26:51.915603 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 17 23:26:51.915675 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 17 23:26:51.915744 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 17 23:26:51.915816 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 17 23:26:51.915888 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 17 23:26:51.915956 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 17 23:26:51.916091 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 17 23:26:51.916176 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 17 23:26:51.916249 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 17 23:26:51.916317 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 17 23:26:51.916398 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 17 23:26:51.916467 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 17 23:26:51.916534 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 17 23:26:51.916608 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 17 23:26:51.916676 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 17 23:26:51.916743 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 17 23:26:51.916816 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 17 23:26:51.916885 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 17 23:26:51.916955 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 17 23:26:51.917046 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 17 23:26:51.919213 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:26:51.919306 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 17 23:26:51.919378 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:26:51.919452 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 17 23:26:51.919524 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:26:51.919604 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 17 23:26:51.919675 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:26:51.919748 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 17 23:26:51.919819 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:26:51.919891 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 17 23:26:51.919962 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:26:51.920085 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 17 23:26:51.920165 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:26:51.920240 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 17 23:26:51.920309 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:26:51.920384 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 17 23:26:51.920454 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:26:51.920528 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 17 23:26:51.920604 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 17 23:26:51.920674 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 17 23:26:51.920742 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 17 23:26:51.920813 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 17 23:26:51.920881 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 17 23:26:51.920953 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 17 23:26:51.921037 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 17 23:26:51.921174 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 17 23:26:51.921256 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 17 23:26:51.921328 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 17 23:26:51.921396 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 17 23:26:51.921465 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 17 23:26:51.921533 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 17 23:26:51.921603 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 17 23:26:51.921670 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 17 23:26:51.921739 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 17 23:26:51.921810 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 17 23:26:51.921880 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 17 23:26:51.921948 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 17 23:26:51.922235 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 17 23:26:51.922350 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 17 23:26:51.922423 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 17 23:26:51.922491 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 17 23:26:51.922559 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 17 23:26:51.922635 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 17 23:26:51.922722 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 17 23:26:51.922792 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:26:51.922868 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 17 23:26:51.922943 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 17 23:26:51.923023 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 17 23:26:51.923111 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 17 23:26:51.923179 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:26:51.923256 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 17 23:26:51.923329 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 17 23:26:51.923400 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 17 23:26:51.923469 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 17 23:26:51.923540 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 17 23:26:51.923609 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:26:51.923687 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 17 23:26:51.923760 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 17 23:26:51.923827 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 17 23:26:51.923897 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 17 23:26:51.923968 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:26:51.924116 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 17 23:26:51.924202 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 17 23:26:51.924274 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 17 23:26:51.924341 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 17 23:26:51.924407 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 17 23:26:51.924474 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:26:51.924551 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 17 23:26:51.924623 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 17 23:26:51.924693 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 17 23:26:51.924763 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 17 23:26:51.924830 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 17 23:26:51.924898 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:26:51.924973 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 17 23:26:51.925087 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 17 23:26:51.925169 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 17 23:26:51.925241 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 17 23:26:51.925310 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 17 23:26:51.925383 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 17 23:26:51.925450 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:26:51.925521 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 17 23:26:51.925589 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 17 23:26:51.925657 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 17 23:26:51.925724 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:26:51.925794 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 17 23:26:51.925865 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 17 23:26:51.925935 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 17 23:26:51.926016 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:26:51.926108 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 17 23:26:51.926174 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 17 23:26:51.926235 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 17 23:26:51.926311 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 17 23:26:51.926376 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 17 23:26:51.926445 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:26:51.926518 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 17 23:26:51.926581 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 17 23:26:51.926643 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:26:51.926715 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 17 23:26:51.926803 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 17 23:26:51.926873 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:26:51.926948 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 17 23:26:51.927111 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 17 23:26:51.927217 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:26:51.927292 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 17 23:26:51.927355 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 17 23:26:51.927416 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:26:51.927491 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 17 23:26:51.927553 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 17 23:26:51.927620 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:26:51.927694 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 17 23:26:51.927768 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 17 23:26:51.927836 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:26:51.927908 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 17 23:26:51.927972 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 17 23:26:51.928067 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:26:51.928161 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 17 23:26:51.928228 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 17 23:26:51.928298 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:26:51.928308 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 17 23:26:51.928317 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 17 23:26:51.928325 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 17 23:26:51.928333 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 17 23:26:51.928341 kernel: iommu: Default domain type: Translated
Apr 17 23:26:51.928349 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 23:26:51.928358 kernel: efivars: Registered efivars operations
Apr 17 23:26:51.928365 kernel: vgaarb: loaded
Apr 17 23:26:51.928376 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 23:26:51.928384 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:26:51.928393 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:26:51.928401 kernel: pnp: PnP ACPI init
Apr 17 23:26:51.928480 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 17 23:26:51.928492 kernel: pnp: PnP ACPI: found 1 devices
Apr 17 23:26:51.928500 kernel: NET: Registered PF_INET protocol family
Apr 17 23:26:51.928508 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:26:51.928518 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:26:51.928526 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:26:51.928535 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:26:51.928543
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 23:26:51.928551 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 23:26:51.928559 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:26:51.928567 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:26:51.928575 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:26:51.928656 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 17 23:26:51.928671 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:26:51.928679 kernel: kvm [1]: HYP mode not available Apr 17 23:26:51.928687 kernel: Initialise system trusted keyrings Apr 17 23:26:51.928695 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 23:26:51.928703 kernel: Key type asymmetric registered Apr 17 23:26:51.928711 kernel: Asymmetric key parser 'x509' registered Apr 17 23:26:51.928719 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 17 23:26:51.928727 kernel: io scheduler mq-deadline registered Apr 17 23:26:51.928735 kernel: io scheduler kyber registered Apr 17 23:26:51.928745 kernel: io scheduler bfq registered Apr 17 23:26:51.928754 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 17 23:26:51.928827 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 17 23:26:51.928897 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 17 23:26:51.928965 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.929052 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 17 23:26:51.929138 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 17 23:26:51.929213 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.929287 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 17 23:26:51.929358 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 17 23:26:51.929428 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.929501 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 17 23:26:51.929571 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 17 23:26:51.929643 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.929717 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 17 23:26:51.929789 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 17 23:26:51.929860 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.929936 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 17 23:26:51.930044 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 17 23:26:51.930146 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.930220 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 17 23:26:51.930288 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 17 23:26:51.930357 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.930429 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 17 23:26:51.930499 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 17 23:26:51.930571 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.930583 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 17 23:26:51.930658 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 17 23:26:51.930728 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 17 23:26:51.930797 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:26:51.930808 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 17 23:26:51.930818 kernel: ACPI: button: Power Button [PWRB] Apr 17 23:26:51.930827 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 17 23:26:51.930903 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 17 23:26:51.930980 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 17 23:26:51.930992 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:26:51.931013 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 17 23:26:51.933461 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 17 23:26:51.933498 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 17 23:26:51.933507 kernel: thunder_xcv, ver 1.0 Apr 17 23:26:51.933524 kernel: thunder_bgx, ver 1.0 Apr 17 23:26:51.933535 kernel: nicpf, ver 1.0 Apr 17 23:26:51.933543 kernel: nicvf, ver 1.0 Apr 17 23:26:51.933636 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 17 23:26:51.933703 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T23:26:51 UTC (1776468411) Apr 17 23:26:51.933713 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 17 23:26:51.933722 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 17 23:26:51.933730 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 17 23:26:51.933741 kernel: watchdog: Hard watchdog permanently disabled Apr 17 23:26:51.933749 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:26:51.933757 kernel: Segment Routing with IPv6 Apr 17 23:26:51.933765 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 17 23:26:51.933772 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:26:51.933780 kernel: Key type dns_resolver registered Apr 17 23:26:51.933789 kernel: registered taskstats version 1 Apr 17 23:26:51.933797 kernel: Loading compiled-in X.509 certificates Apr 17 23:26:51.933805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 1161289bfc8d953baa9f687fefeecf0e077bc535' Apr 17 23:26:51.933815 kernel: Key type .fscrypt registered Apr 17 23:26:51.933823 kernel: Key type fscrypt-provisioning registered Apr 17 23:26:51.933830 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 23:26:51.933838 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:26:51.933846 kernel: ima: No architecture policies found Apr 17 23:26:51.933854 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 17 23:26:51.933862 kernel: clk: Disabling unused clocks Apr 17 23:26:51.933870 kernel: Freeing unused kernel memory: 39424K Apr 17 23:26:51.933878 kernel: Run /init as init process Apr 17 23:26:51.933886 kernel: with arguments: Apr 17 23:26:51.933896 kernel: /init Apr 17 23:26:51.933904 kernel: with environment: Apr 17 23:26:51.933911 kernel: HOME=/ Apr 17 23:26:51.933919 kernel: TERM=linux Apr 17 23:26:51.933929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:26:51.933939 systemd[1]: Detected virtualization kvm. Apr 17 23:26:51.933948 systemd[1]: Detected architecture arm64. Apr 17 23:26:51.933957 systemd[1]: Running in initrd. Apr 17 23:26:51.933966 systemd[1]: No hostname configured, using default hostname. Apr 17 23:26:51.933974 systemd[1]: Hostname set to . 
Apr 17 23:26:51.933982 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:26:51.933990 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:26:51.934043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:26:51.934054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:26:51.934077 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:26:51.934090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:26:51.934099 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:26:51.934108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:26:51.934118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:26:51.934126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:26:51.934135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:26:51.934147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:26:51.934159 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:26:51.934168 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:26:51.934177 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:26:51.934185 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:26:51.934193 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:26:51.934202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:26:51.934210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 17 23:26:51.934219 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:26:51.934228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:26:51.934239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:26:51.934248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:26:51.934258 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:26:51.934266 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:26:51.934275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:26:51.934284 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:26:51.934292 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:26:51.934301 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:26:51.934311 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:26:51.934320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:26:51.934328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:26:51.934365 systemd-journald[237]: Collecting audit messages is disabled. Apr 17 23:26:51.934388 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:26:51.934396 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:26:51.934406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:26:51.934414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:26:51.934424 kernel: Bridge firewalling registered Apr 17 23:26:51.934434 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 17 23:26:51.934442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:26:51.934451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:26:51.934460 systemd-journald[237]: Journal started Apr 17 23:26:51.934480 systemd-journald[237]: Runtime Journal (/run/log/journal/4ee20eafb15b4f0fab7ef147dbe8eb91) is 8.0M, max 76.6M, 68.6M free. Apr 17 23:26:51.893945 systemd-modules-load[238]: Inserted module 'overlay' Apr 17 23:26:51.917134 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 17 23:26:51.953084 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:26:51.955283 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:26:51.958680 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:26:51.965951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:26:51.974655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:26:51.976243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:26:51.982448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:26:51.983586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:26:51.989302 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:26:52.004154 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:26:52.012726 dracut-cmdline[272]: dracut-dracut-053 Apr 17 23:26:52.013652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 17 23:26:52.017096 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349 Apr 17 23:26:52.048219 systemd-resolved[281]: Positive Trust Anchors: Apr 17 23:26:52.048236 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:26:52.048269 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:26:52.058564 systemd-resolved[281]: Defaulting to hostname 'linux'. Apr 17 23:26:52.060770 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:26:52.061575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:26:52.110077 kernel: SCSI subsystem initialized Apr 17 23:26:52.114094 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:26:52.122126 kernel: iscsi: registered transport (tcp) Apr 17 23:26:52.136294 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:26:52.136359 kernel: QLogic iSCSI HBA Driver Apr 17 23:26:52.182590 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:26:52.191327 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 17 23:26:52.209270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:26:52.209366 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:26:52.209392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:26:52.258115 kernel: raid6: neonx8 gen() 15713 MB/s Apr 17 23:26:52.275108 kernel: raid6: neonx4 gen() 15562 MB/s Apr 17 23:26:52.292107 kernel: raid6: neonx2 gen() 13157 MB/s Apr 17 23:26:52.309116 kernel: raid6: neonx1 gen() 10426 MB/s Apr 17 23:26:52.326113 kernel: raid6: int64x8 gen() 6921 MB/s Apr 17 23:26:52.343140 kernel: raid6: int64x4 gen() 7302 MB/s Apr 17 23:26:52.360119 kernel: raid6: int64x2 gen() 6098 MB/s Apr 17 23:26:52.377113 kernel: raid6: int64x1 gen() 5012 MB/s Apr 17 23:26:52.377185 kernel: raid6: using algorithm neonx8 gen() 15713 MB/s Apr 17 23:26:52.394135 kernel: raid6: .... xor() 11904 MB/s, rmw enabled Apr 17 23:26:52.394221 kernel: raid6: using neon recovery algorithm Apr 17 23:26:52.399436 kernel: xor: measuring software checksum speed Apr 17 23:26:52.399508 kernel: 8regs : 19793 MB/sec Apr 17 23:26:52.399532 kernel: 32regs : 18573 MB/sec Apr 17 23:26:52.399554 kernel: arm64_neon : 27079 MB/sec Apr 17 23:26:52.400211 kernel: xor: using function: arm64_neon (27079 MB/sec) Apr 17 23:26:52.451130 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:26:52.465796 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:26:52.472346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:26:52.485644 systemd-udevd[457]: Using default interface naming scheme 'v255'. Apr 17 23:26:52.489119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:26:52.498310 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 17 23:26:52.513330 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Apr 17 23:26:52.547916 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:26:52.554300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:26:52.607054 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:26:52.614783 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:26:52.634438 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:26:52.636594 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:26:52.638768 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:26:52.640309 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:26:52.647355 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:26:52.670424 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:26:52.742862 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:26:52.744599 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:26:52.744640 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 23:26:52.752110 kernel: ACPI: bus type USB registered Apr 17 23:26:52.752335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:26:52.753764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:26:52.756243 kernel: usbcore: registered new interface driver usbfs Apr 17 23:26:52.756301 kernel: usbcore: registered new interface driver hub Apr 17 23:26:52.756314 kernel: usbcore: registered new device driver usb Apr 17 23:26:52.757598 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:26:52.758522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:26:52.758766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:26:52.760876 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:26:52.767447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:26:52.786353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:26:52.788436 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 17 23:26:52.791224 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 17 23:26:52.791442 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:26:52.794074 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:26:52.794483 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:26:52.801243 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 17 23:26:52.801504 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 17 23:26:52.804188 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 17 23:26:52.804392 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 17 23:26:52.804480 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 17 23:26:52.804572 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 17 23:26:52.806074 kernel: hub 1-0:1.0: USB hub found Apr 17 23:26:52.807481 kernel: hub 1-0:1.0: 4 ports detected Apr 17 23:26:52.807672 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 17 23:26:52.810083 kernel: hub 2-0:1.0: USB hub found Apr 17 23:26:52.810237 kernel: hub 2-0:1.0: 4 ports detected Apr 17 23:26:52.817368 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 17 23:26:52.819191 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 17 23:26:52.821519 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 17 23:26:52.822307 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 17 23:26:52.822479 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:26:52.822407 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:26:52.827238 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:26:52.827278 kernel: GPT:17805311 != 80003071 Apr 17 23:26:52.827288 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:26:52.828327 kernel: GPT:17805311 != 80003071 Apr 17 23:26:52.828360 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:26:52.829259 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:26:52.832102 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 17 23:26:52.863094 kernel: BTRFS: device fsid 6218981f-ef91-4196-be05-d5f6a224b350 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (509) Apr 17 23:26:52.875081 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (515) Apr 17 23:26:52.876748 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 23:26:52.888600 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 17 23:26:52.894921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:26:52.900509 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Apr 17 23:26:52.902533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 23:26:52.913080 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:26:52.919038 disk-uuid[574]: Primary Header is updated. Apr 17 23:26:52.919038 disk-uuid[574]: Secondary Entries is updated. Apr 17 23:26:52.919038 disk-uuid[574]: Secondary Header is updated. Apr 17 23:26:52.927086 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:26:52.932082 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:26:52.936129 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:26:53.047237 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 17 23:26:53.180748 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 17 23:26:53.180864 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 17 23:26:53.181288 kernel: usbcore: registered new interface driver usbhid Apr 17 23:26:53.181338 kernel: usbhid: USB HID core driver Apr 17 23:26:53.289127 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 17 23:26:53.419095 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 17 23:26:53.473125 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 17 23:26:53.943130 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:26:53.944090 disk-uuid[575]: The operation has completed successfully. Apr 17 23:26:53.988724 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:26:53.989617 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 17 23:26:54.010358 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:26:54.015397 sh[593]: Success Apr 17 23:26:54.029098 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 17 23:26:54.075525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:26:54.089744 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:26:54.091356 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:26:54.105446 kernel: BTRFS info (device dm-0): first mount of filesystem 6218981f-ef91-4196-be05-d5f6a224b350 Apr 17 23:26:54.105536 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:26:54.105563 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:26:54.105587 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:26:54.106328 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:26:54.113083 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:26:54.114974 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:26:54.116595 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:26:54.130466 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:26:54.133219 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 17 23:26:54.147576 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:26:54.147617 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:26:54.147628 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:26:54.154081 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:26:54.154129 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:26:54.166706 kernel: BTRFS info (device sda6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:26:54.166390 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:26:54.174577 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:26:54.183721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:26:54.278866 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:26:54.282651 ignition[684]: Ignition 2.19.0 Apr 17 23:26:54.282662 ignition[684]: Stage: fetch-offline Apr 17 23:26:54.285432 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:26:54.282712 ignition[684]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:26:54.286192 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:26:54.282720 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 17 23:26:54.282866 ignition[684]: parsed url from cmdline: "" Apr 17 23:26:54.282870 ignition[684]: no config URL provided Apr 17 23:26:54.282874 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:26:54.282881 ignition[684]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:26:54.282885 ignition[684]: failed to fetch config: resource requires networking Apr 17 23:26:54.283407 ignition[684]: Ignition finished successfully Apr 17 23:26:54.305708 systemd-networkd[779]: lo: Link UP Apr 17 23:26:54.305722 systemd-networkd[779]: lo: Gained carrier Apr 17 23:26:54.307297 systemd-networkd[779]: Enumeration completed Apr 17 23:26:54.307399 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:26:54.308503 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:26:54.308506 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:26:54.309365 systemd[1]: Reached target network.target - Network. Apr 17 23:26:54.310257 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:26:54.310260 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:26:54.310720 systemd-networkd[779]: eth0: Link UP Apr 17 23:26:54.310723 systemd-networkd[779]: eth0: Gained carrier Apr 17 23:26:54.310730 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:26:54.317308 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 17 23:26:54.318526 systemd-networkd[779]: eth1: Link UP
Apr 17 23:26:54.318529 systemd-networkd[779]: eth1: Gained carrier
Apr 17 23:26:54.318537 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:26:54.333114 ignition[782]: Ignition 2.19.0
Apr 17 23:26:54.333130 ignition[782]: Stage: fetch
Apr 17 23:26:54.333296 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:26:54.333305 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:26:54.333387 ignition[782]: parsed url from cmdline: ""
Apr 17 23:26:54.333390 ignition[782]: no config URL provided
Apr 17 23:26:54.333394 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:26:54.333401 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:26:54.333417 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 17 23:26:54.334125 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 17 23:26:54.364165 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 17 23:26:54.381168 systemd-networkd[779]: eth0: DHCPv4 address 142.132.185.111/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 17 23:26:54.535074 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 17 23:26:54.543108 ignition[782]: GET result: OK
Apr 17 23:26:54.543339 ignition[782]: parsing config with SHA512: 528f92748bd8853f1da0f22162af45f0ec1a20be3386750ae542d3fc0993a880fef8af61ba0ef6613321e885dce63f2c16a9460f35349b113eb26e757957d9a2
Apr 17 23:26:54.548895 unknown[782]: fetched base config from "system"
Apr 17 23:26:54.548909 unknown[782]: fetched base config from "system"
Apr 17 23:26:54.549340 ignition[782]: fetch: fetch complete
Apr 17 23:26:54.548914 unknown[782]: fetched user config from "hetzner"
Apr 17 23:26:54.549345 ignition[782]: fetch: fetch passed
Apr 17 23:26:54.552641 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:26:54.549394 ignition[782]: Ignition finished successfully
Apr 17 23:26:54.562377 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:26:54.577455 ignition[789]: Ignition 2.19.0
Apr 17 23:26:54.577472 ignition[789]: Stage: kargs
Apr 17 23:26:54.577678 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:26:54.577688 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:26:54.578833 ignition[789]: kargs: kargs passed
Apr 17 23:26:54.578890 ignition[789]: Ignition finished successfully
Apr 17 23:26:54.583106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:26:54.596352 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:26:54.610220 ignition[795]: Ignition 2.19.0
Apr 17 23:26:54.610241 ignition[795]: Stage: disks
Apr 17 23:26:54.610488 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:26:54.610502 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:26:54.611689 ignition[795]: disks: disks passed
Apr 17 23:26:54.611740 ignition[795]: Ignition finished successfully
Apr 17 23:26:54.616649 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:26:54.617911 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:26:54.619379 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:26:54.620640 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:26:54.621692 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:26:54.622593 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:26:54.629265 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:26:54.648824 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 17 23:26:54.653938 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:26:54.661285 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:26:54.712080 kernel: EXT4-fs (sda9): mounted filesystem 2a4b2d55-130a-4cda-bef1-b1e6ed7bcf6b r/w with ordered data mode. Quota mode: none.
Apr 17 23:26:54.712425 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:26:54.714593 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:26:54.727257 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:26:54.732266 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:26:54.733937 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 23:26:54.736212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:26:54.736241 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:26:54.741919 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:26:54.745164 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:26:54.748086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (811)
Apr 17 23:26:54.753564 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:26:54.753620 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:26:54.755087 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:26:54.759131 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:26:54.759171 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:26:54.762511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:26:54.802735 coreos-metadata[813]: Apr 17 23:26:54.802 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 17 23:26:54.804683 coreos-metadata[813]: Apr 17 23:26:54.804 INFO Fetch successful
Apr 17 23:26:54.807910 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:26:54.811150 coreos-metadata[813]: Apr 17 23:26:54.808 INFO wrote hostname ci-4081-3-6-n-ddb46eeabf to /sysroot/etc/hostname
Apr 17 23:26:54.810010 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:26:54.815009 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:26:54.821135 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:26:54.826562 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:26:54.927926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:26:54.941365 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:26:54.946416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:26:54.957146 kernel: BTRFS info (device sda6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:26:54.981403 ignition[929]: INFO : Ignition 2.19.0
Apr 17 23:26:54.983108 ignition[929]: INFO : Stage: mount
Apr 17 23:26:54.983108 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:26:54.983108 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:26:54.986164 ignition[929]: INFO : mount: mount passed
Apr 17 23:26:54.986164 ignition[929]: INFO : Ignition finished successfully
Apr 17 23:26:54.986671 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:26:54.988204 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:26:54.994232 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:26:55.106236 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:26:55.119540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:26:55.131082 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Apr 17 23:26:55.132824 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:26:55.132858 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:26:55.132870 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:26:55.136072 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:26:55.136125 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:26:55.140333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:26:55.168212 ignition[957]: INFO : Ignition 2.19.0
Apr 17 23:26:55.168938 ignition[957]: INFO : Stage: files
Apr 17 23:26:55.169485 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:26:55.169485 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:26:55.171076 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:26:55.171076 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:26:55.171076 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:26:55.174511 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:26:55.175405 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:26:55.176731 unknown[957]: wrote ssh authorized keys file for user: core
Apr 17 23:26:55.178226 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:26:55.179303 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:26:55.180419 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 17 23:26:55.279858 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:26:55.411169 systemd-networkd[779]: eth0: Gained IPv6LL
Apr 17 23:26:55.492087 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:26:55.492087 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 17 23:26:55.494728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 17 23:26:55.819394 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:26:56.243365 systemd-networkd[779]: eth1: Gained IPv6LL
Apr 17 23:26:56.369264 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 17 23:26:56.369264 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:26:56.375170 ignition[957]: INFO : files: files passed
Apr 17 23:26:56.375170 ignition[957]: INFO : Ignition finished successfully
Apr 17 23:26:56.373696 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:26:56.382447 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:26:56.386252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:26:56.390246 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:26:56.390819 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:26:56.402921 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:26:56.402921 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:26:56.405749 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:26:56.406948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:26:56.408080 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:26:56.421221 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:26:56.460368 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:26:56.462100 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:26:56.463815 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:26:56.466806 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:26:56.469810 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:26:56.487456 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:26:56.505232 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:26:56.518448 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:26:56.533429 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:26:56.534841 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:26:56.536451 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:26:56.537231 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:26:56.537378 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:26:56.539049 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:26:56.540500 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:26:56.541674 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:26:56.542761 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:26:56.543935 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:26:56.545039 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:26:56.546080 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:26:56.547221 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:26:56.548383 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:26:56.549372 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:26:56.550194 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:26:56.550368 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:26:56.551576 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:26:56.552748 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:26:56.553789 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:26:56.554271 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:26:56.555044 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:26:56.555227 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:26:56.556799 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:26:56.557013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:26:56.558077 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:26:56.558226 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:26:56.559092 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 17 23:26:56.559268 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:26:56.570019 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:26:56.570746 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:26:56.570905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:26:56.574278 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:26:56.577160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:26:56.577327 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:26:56.578120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:26:56.579726 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:26:56.592820 ignition[1009]: INFO : Ignition 2.19.0
Apr 17 23:26:56.592820 ignition[1009]: INFO : Stage: umount
Apr 17 23:26:56.592820 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:26:56.592820 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:26:56.592820 ignition[1009]: INFO : umount: umount passed
Apr 17 23:26:56.592820 ignition[1009]: INFO : Ignition finished successfully
Apr 17 23:26:56.591523 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:26:56.591636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:26:56.593382 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:26:56.593761 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:26:56.595301 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:26:56.595386 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:26:56.597203 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:26:56.597273 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:26:56.598017 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:26:56.598052 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:26:56.599870 systemd[1]: Stopped target network.target - Network.
Apr 17 23:26:56.600473 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:26:56.600531 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:26:56.601774 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:26:56.603759 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:26:56.605391 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:26:56.606049 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:26:56.606981 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:26:56.609469 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:26:56.609513 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:26:56.610432 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:26:56.610472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:26:56.611511 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:26:56.611559 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:26:56.612643 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:26:56.612685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:26:56.613952 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:26:56.614868 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:26:56.617214 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:26:56.617705 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:26:56.617788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:26:56.619413 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:26:56.619478 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:26:56.622285 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:26:56.622380 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:26:56.623704 systemd-networkd[779]: eth1: DHCPv6 lease lost
Apr 17 23:26:56.625141 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:26:56.625225 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:26:56.627122 systemd-networkd[779]: eth0: DHCPv6 lease lost
Apr 17 23:26:56.628963 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:26:56.629100 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:26:56.630364 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:26:56.630415 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:26:56.638301 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:26:56.638871 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:26:56.638980 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:26:56.639841 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:26:56.639884 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:26:56.643181 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:26:56.643237 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:26:56.643962 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:26:56.656417 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:26:56.656513 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:26:56.668495 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:26:56.668802 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:26:56.672435 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:26:56.672487 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:26:56.673283 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:26:56.673324 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:26:56.674015 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:26:56.675103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:26:56.676990 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:26:56.677041 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:26:56.678619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:26:56.678672 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:26:56.683264 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:26:56.683814 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:26:56.683868 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:26:56.686787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:26:56.686849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:26:56.702913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:26:56.703134 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:26:56.704825 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:26:56.709289 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:26:56.719379 systemd[1]: Switching root.
Apr 17 23:26:56.751507 systemd-journald[237]: Journal stopped
Apr 17 23:26:57.642419 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:26:57.642510 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:26:57.642536 kernel: SELinux: policy capability open_perms=1
Apr 17 23:26:57.642548 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:26:57.642558 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:26:57.642572 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:26:57.642582 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:26:57.642592 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:26:57.642602 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:26:57.642612 kernel: audit: type=1403 audit(1776468416.904:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:26:57.642623 systemd[1]: Successfully loaded SELinux policy in 35.005ms.
Apr 17 23:26:57.642645 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.126ms.
Apr 17 23:26:57.642678 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:26:57.642694 systemd[1]: Detected virtualization kvm.
Apr 17 23:26:57.642705 systemd[1]: Detected architecture arm64.
Apr 17 23:26:57.642717 systemd[1]: Detected first boot.
Apr 17 23:26:57.642727 systemd[1]: Hostname set to .
Apr 17 23:26:57.642738 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:26:57.642749 zram_generator::config[1051]: No configuration found.
Apr 17 23:26:57.642764 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:26:57.642776 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:26:57.642787 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:26:57.642800 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:26:57.642815 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:26:57.642826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:26:57.642837 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:26:57.642848 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:26:57.642861 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:26:57.642872 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:26:57.642884 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:26:57.642894 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:26:57.642933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:26:57.642949 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:26:57.642972 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:26:57.642991 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:26:57.643002 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:26:57.643020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:26:57.643033 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 17 23:26:57.643046 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:26:57.643068 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:26:57.643081 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:26:57.643103 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:26:57.643118 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:26:57.643128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:26:57.643139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:26:57.643149 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:26:57.643160 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:26:57.643171 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:26:57.643193 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:26:57.643207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:26:57.643218 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:26:57.643229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:26:57.643242 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:26:57.643252 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:26:57.643263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:26:57.643273 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:26:57.643288 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:26:57.643298 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:26:57.643309 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:26:57.643320 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:26:57.643332 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:26:57.643342 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:26:57.643353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:26:57.643363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:26:57.643377 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:26:57.643390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:26:57.643403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:26:57.643414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:26:57.643424 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:26:57.643435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:26:57.643446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:26:57.643456 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:26:57.643467 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:26:57.643477 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:26:57.643489 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:26:57.643499 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:26:57.643510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:26:57.643520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:26:57.643531 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:26:57.643542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:26:57.643552 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:26:57.643562 systemd[1]: Stopped verity-setup.service.
Apr 17 23:26:57.643573 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:26:57.643585 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:26:57.643596 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:26:57.643606 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:26:57.643617 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:26:57.643627 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:26:57.643639 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:26:57.643650 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:26:57.643661 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:26:57.643672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:26:57.643682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:26:57.643693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:26:57.643703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:26:57.643733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:26:57.643751 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:26:57.643764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:26:57.643775 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:26:57.643785 kernel: fuse: init (API version 7.39)
Apr 17 23:26:57.643795 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:26:57.643806 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:26:57.643818 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:26:57.643830 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:26:57.643841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:26:57.643851 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:26:57.643863 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:26:57.643873 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:26:57.643884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:26:57.643933 systemd-journald[1118]: Collecting audit messages is disabled.
Apr 17 23:26:57.643960 systemd-journald[1118]: Journal started
Apr 17 23:26:57.643983 systemd-journald[1118]: Runtime Journal (/run/log/journal/4ee20eafb15b4f0fab7ef147dbe8eb91) is 8.0M, max 76.6M, 68.6M free.
Apr 17 23:26:57.382418 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:26:57.400784 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:26:57.401317 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:26:57.653282 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:26:57.653336 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:26:57.655492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:26:57.661744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:26:57.664074 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:26:57.668316 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:26:57.689075 kernel: loop: module loaded
Apr 17 23:26:57.689133 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:26:57.707880 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:26:57.692447 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:26:57.695128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:26:57.695923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:26:57.696896 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:26:57.714118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:26:57.721775 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:26:57.725473 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:26:57.740048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:26:57.746239 kernel: loop0: detected capacity change from 0 to 114432
Apr 17 23:26:57.757033 kernel: ACPI: bus type drm_connector registered
Apr 17 23:26:57.753346 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:26:57.755743 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:26:57.758574 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:26:57.763313 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:26:57.769251 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:26:57.765494 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:26:57.765639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:26:57.794107 kernel: loop1: detected capacity change from 0 to 197488
Apr 17 23:26:57.797812 systemd-journald[1118]: Time spent on flushing to /var/log/journal/4ee20eafb15b4f0fab7ef147dbe8eb91 is 57.484ms for 1131 entries.
Apr 17 23:26:57.797812 systemd-journald[1118]: System Journal (/var/log/journal/4ee20eafb15b4f0fab7ef147dbe8eb91) is 8.0M, max 584.8M, 576.8M free.
Apr 17 23:26:57.866579 systemd-journald[1118]: Received client request to flush runtime journal.
Apr 17 23:26:57.866623 kernel: loop2: detected capacity change from 0 to 114328
Apr 17 23:26:57.832434 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:26:57.835204 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:26:57.849516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:26:57.859843 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:26:57.868425 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:26:57.877125 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:26:57.889554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:26:57.892638 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:26:57.901108 kernel: loop3: detected capacity change from 0 to 8
Apr 17 23:26:57.927663 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Apr 17 23:26:57.929387 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Apr 17 23:26:57.933019 kernel: loop4: detected capacity change from 0 to 114432
Apr 17 23:26:57.945475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:26:57.949086 kernel: loop5: detected capacity change from 0 to 197488
Apr 17 23:26:57.972361 kernel: loop6: detected capacity change from 0 to 114328
Apr 17 23:26:57.990319 kernel: loop7: detected capacity change from 0 to 8
Apr 17 23:26:57.990864 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 17 23:26:57.992697 (sd-merge)[1190]: Merged extensions into '/usr'.
Apr 17 23:26:58.003532 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:26:58.003948 systemd[1]: Reloading...
Apr 17 23:26:58.150226 zram_generator::config[1219]: No configuration found.
Apr 17 23:26:58.235104 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:26:58.287943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:26:58.334560 systemd[1]: Reloading finished in 329 ms.
Apr 17 23:26:58.361402 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:26:58.362662 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:26:58.373746 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:26:58.376218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:26:58.389434 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:26:58.389453 systemd[1]: Reloading...
Apr 17 23:26:58.421337 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:26:58.421947 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:26:58.422783 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:26:58.424421 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Apr 17 23:26:58.424559 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Apr 17 23:26:58.428223 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:26:58.428325 systemd-tmpfiles[1256]: Skipping /boot
Apr 17 23:26:58.438317 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:26:58.438427 systemd-tmpfiles[1256]: Skipping /boot
Apr 17 23:26:58.477082 zram_generator::config[1283]: No configuration found.
Apr 17 23:26:58.573132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:26:58.619295 systemd[1]: Reloading finished in 229 ms.
Apr 17 23:26:58.635551 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:26:58.637290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:26:58.656527 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:26:58.664242 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:26:58.666976 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:26:58.677342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:26:58.682184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:26:58.686502 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:26:58.689499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:26:58.694574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:26:58.702186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:26:58.707050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:26:58.709019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:26:58.712376 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:26:58.718803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:26:58.720302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:26:58.724133 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:26:58.728070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:26:58.731306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:26:58.732075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:26:58.733109 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:26:58.742159 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:26:58.743359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:26:58.743521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:26:58.745546 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:26:58.746310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:26:58.748592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:26:58.748714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:26:58.751714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:26:58.751853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:26:58.760769 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:26:58.765660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:26:58.766209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:26:58.776244 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:26:58.777268 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:26:58.779795 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:26:58.782727 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Apr 17 23:26:58.797305 augenrules[1358]: No rules
Apr 17 23:26:58.798010 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:26:58.800752 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:26:58.815008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:26:58.826528 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:26:58.828850 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:26:58.926483 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:26:58.926853 systemd-networkd[1366]: lo: Link UP
Apr 17 23:26:58.926857 systemd-networkd[1366]: lo: Gained carrier
Apr 17 23:26:58.927231 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:26:58.929169 systemd-networkd[1366]: Enumeration completed
Apr 17 23:26:58.929281 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:26:58.938730 systemd-resolved[1330]: Positive Trust Anchors:
Apr 17 23:26:58.938754 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:26:58.938788 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:26:58.945614 systemd-resolved[1330]: Using system hostname 'ci-4081-3-6-n-ddb46eeabf'.
Apr 17 23:26:58.947809 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:26:58.948668 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:26:58.949773 systemd[1]: Reached target network.target - Network.
Apr 17 23:26:58.950356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:26:58.988967 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 17 23:26:59.044096 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:26:59.088054 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:26:59.088547 systemd-networkd[1366]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:26:59.089233 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 17 23:26:59.089345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:26:59.090220 systemd-networkd[1366]: eth1: Link UP
Apr 17 23:26:59.090231 systemd-networkd[1366]: eth1: Gained carrier
Apr 17 23:26:59.090247 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:26:59.095280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:26:59.109746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:26:59.114261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:26:59.116214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:26:59.116256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:26:59.117106 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 17 23:26:59.117145 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 17 23:26:59.117157 kernel: [drm] features: -context_init
Apr 17 23:26:59.120492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:26:59.120685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:26:59.121660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:26:59.121786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:26:59.122684 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:26:59.122698 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:26:59.123364 systemd-networkd[1366]: eth0: Link UP
Apr 17 23:26:59.123374 systemd-networkd[1366]: eth0: Gained carrier
Apr 17 23:26:59.123389 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:26:59.126486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:26:59.131248 kernel: [drm] number of scanouts: 1
Apr 17 23:26:59.131307 kernel: [drm] number of cap sets: 0
Apr 17 23:26:59.133631 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:26:59.133834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:26:59.134070 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1381)
Apr 17 23:26:59.136186 systemd-networkd[1366]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 17 23:26:59.136817 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:26:59.138997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:26:59.144302 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 17 23:26:59.183025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 17 23:26:59.185357 kernel: Console: switching to colour frame buffer device 160x50
Apr 17 23:26:59.192092 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 17 23:26:59.192583 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:26:59.193260 systemd-networkd[1366]: eth0: DHCPv4 address 142.132.185.111/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 17 23:26:59.193586 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:26:59.196614 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:26:59.225370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:26:59.239166 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:26:59.241657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:26:59.242123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:26:59.253724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:26:59.319214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:26:59.389045 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:26:59.396370 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:26:59.411611 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:26:59.441263 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:26:59.443030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:26:59.445903 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:26:59.446772 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:26:59.447610 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:26:59.448530 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:26:59.449263 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:26:59.449946 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:26:59.450767 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:26:59.450811 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:26:59.451382 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:26:59.452758 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:26:59.454948 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:26:59.461237 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:26:59.464173 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:26:59.465298 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:26:59.466123 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:26:59.466652 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:26:59.467258 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:26:59.467294 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:26:59.470224 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:26:59.474339 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:26:59.475544 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:26:59.476018 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:26:59.480970 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:26:59.485597 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:26:59.488201 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:26:59.494286 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:26:59.499191 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:26:59.503976 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 17 23:26:59.509992 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:26:59.516686 dbus-daemon[1443]: [system] SELinux support is enabled
Apr 17 23:26:59.517617 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:26:59.523240 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:26:59.526833 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:26:59.527311 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:26:59.530222 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:26:59.537065 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:26:59.538190 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:26:59.543536 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:26:59.549910 coreos-metadata[1442]: Apr 17 23:26:59.549 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 17 23:26:59.549910 coreos-metadata[1442]: Apr 17 23:26:59.549 INFO Fetch successful
Apr 17 23:26:59.549910 coreos-metadata[1442]: Apr 17 23:26:59.549 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 17 23:26:59.549910 coreos-metadata[1442]: Apr 17 23:26:59.549 INFO Fetch successful
Apr 17 23:26:59.552842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:26:59.552909 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:26:59.553829 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:26:59.553849 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:26:59.564525 jq[1444]: false
Apr 17 23:26:59.570374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:26:59.570568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:26:59.577985 jq[1455]: true
Apr 17 23:26:59.592942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:26:59.593608 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:26:59.604120 tar[1460]: linux-arm64/LICENSE
Apr 17 23:26:59.604532 tar[1460]: linux-arm64/helm
Apr 17 23:26:59.613965 update_engine[1454]: I20260417 23:26:59.613749 1454 main.cc:92] Flatcar Update Engine starting
Apr 17 23:26:59.616116 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:26:59.616287 update_engine[1454]: I20260417 23:26:59.616251 1454 update_check_scheduler.cc:74] Next update check in 10m36s
Apr 17 23:26:59.618527 extend-filesystems[1445]: Found loop4
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found loop5
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found loop6
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found loop7
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda1
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda2
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda3
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found usr
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda4
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda6
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda7
Apr 17 23:26:59.619703 extend-filesystems[1445]: Found sda9
Apr 17 23:26:59.619703 extend-filesystems[1445]: Checking size of /dev/sda9
Apr 17 23:26:59.624230 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:26:59.637715 jq[1470]: true
Apr 17 23:26:59.628547 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:26:59.645666 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:26:59.645869 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:26:59.670187 extend-filesystems[1445]: Resized partition /dev/sda9
Apr 17 23:26:59.679004 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:26:59.691367 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 17 23:26:59.727585 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:26:59.740322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:26:59.752938 systemd-logind[1453]: New seat seat0.
Apr 17 23:26:59.762553 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 17 23:26:59.762573 systemd-logind[1453]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 17 23:26:59.763507 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:26:59.786966 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:26:59.792943 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:26:59.841539 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1367)
Apr 17 23:26:59.848623 systemd[1]: Starting sshkeys.service...
Apr 17 23:26:59.856318 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 17 23:26:59.890230 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:26:59.891225 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 17 23:26:59.891225 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 17 23:26:59.891225 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 17 23:26:59.899999 extend-filesystems[1445]: Resized filesystem in /dev/sda9
Apr 17 23:26:59.899999 extend-filesystems[1445]: Found sr0
Apr 17 23:26:59.903578 containerd[1473]: time="2026-04-17T23:26:59.901321560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:26:59.904673 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:26:59.908790 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:26:59.910140 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:26:59.966444 containerd[1473]: time="2026-04-17T23:26:59.962532080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.967739 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:26:59.969179 coreos-metadata[1521]: Apr 17 23:26:59.969 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 17 23:26:59.972262 coreos-metadata[1521]: Apr 17 23:26:59.971 INFO Fetch successful
Apr 17 23:26:59.974317 containerd[1473]: time="2026-04-17T23:26:59.974272920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:26:59.974421 containerd[1473]: time="2026-04-17T23:26:59.974405200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:26:59.974485 containerd[1473]: time="2026-04-17T23:26:59.974462400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:26:59.979821 containerd[1473]: time="2026-04-17T23:26:59.979771640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:26:59.980069 containerd[1473]: time="2026-04-17T23:26:59.980025040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.980640 unknown[1521]: wrote ssh authorized keys file for user: core
Apr 17 23:26:59.982023 containerd[1473]: time="2026-04-17T23:26:59.981818240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:26:59.982023 containerd[1473]: time="2026-04-17T23:26:59.981940040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.983649 containerd[1473]: time="2026-04-17T23:26:59.982553040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:26:59.983649 containerd[1473]: time="2026-04-17T23:26:59.982596640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.983649 containerd[1473]: time="2026-04-17T23:26:59.982628440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:26:59.983649 containerd[1473]: time="2026-04-17T23:26:59.982651000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.983649 containerd[1473]: time="2026-04-17T23:26:59.982797520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.985139 containerd[1473]: time="2026-04-17T23:26:59.984577560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:26:59.987668 containerd[1473]: time="2026-04-17T23:26:59.987279800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:26:59.987668 containerd[1473]: time="2026-04-17T23:26:59.987355440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:26:59.987668 containerd[1473]: time="2026-04-17T23:26:59.987462200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:26:59.987668 containerd[1473]: time="2026-04-17T23:26:59.987509520Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:26:59.997285 containerd[1473]: time="2026-04-17T23:26:59.997246600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:26:59.999082 containerd[1473]: time="2026-04-17T23:26:59.998927360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:26:59.999082 containerd[1473]: time="2026-04-17T23:26:59.998964400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:26:59.999082 containerd[1473]: time="2026-04-17T23:26:59.998991000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:26:59.999082 containerd[1473]: time="2026-04-17T23:26:59.999009480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999367360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999630000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999737120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999762720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999779800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999794360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999817120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999829240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999842520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999856840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999885720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999902440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999914440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:27:00.001112 containerd[1473]: time="2026-04-17T23:26:59.999936520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:26:59.999951920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:26:59.999964800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:26:59.999978200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:26:59.999990400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:27:00.000011800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:27:00.000024480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:27:00.000037600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.001420 containerd[1473]: time="2026-04-17T23:27:00.000050520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002361720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002396640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002413040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002427760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002460560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002486800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002499320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002509840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002629800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002648800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002659960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002672680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:27:00.003345 containerd[1473]: time="2026-04-17T23:27:00.002682040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.003607 containerd[1473]: time="2026-04-17T23:27:00.002694720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:27:00.003607 containerd[1473]: time="2026-04-17T23:27:00.002704440Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:27:00.003607 containerd[1473]: time="2026-04-17T23:27:00.002716520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:27:00.005232 containerd[1473]: time="2026-04-17T23:27:00.005156200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.005783400Z" level=info msg="Connect containerd service"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.005832800Z" level=info msg="using legacy CRI server"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.005841600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.005978800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.006665760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.006923920Z" level=info msg="Start subscribing containerd event"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.006991120Z" level=info msg="Start recovering state"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.007106640Z" level=info msg="Start event monitor"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.007123080Z" level=info msg="Start snapshots syncer"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.007135960Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:27:00.007720 containerd[1473]: time="2026-04-17T23:27:00.007144720Z" level=info msg="Start streaming server"
Apr 17 23:27:00.009696 containerd[1473]: time="2026-04-17T23:27:00.009675560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:27:00.010470 containerd[1473]: time="2026-04-17T23:27:00.010440200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:27:00.010595 containerd[1473]: time="2026-04-17T23:27:00.010581600Z" level=info msg="containerd successfully booted in 0.117997s"
Apr 17 23:27:00.010654 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:27:00.022623 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:27:00.024458 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 17 23:27:00.030427 systemd[1]: Finished sshkeys.service.
Apr 17 23:27:00.344360 tar[1460]: linux-arm64/README.md
Apr 17 23:27:00.359141 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:27:00.403433 systemd-networkd[1366]: eth0: Gained IPv6LL
Apr 17 23:27:00.404367 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:27:00.407624 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:27:00.411689 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:27:00.423388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:00.426758 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:27:00.472872 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:27:00.980300 systemd-networkd[1366]: eth1: Gained IPv6LL
Apr 17 23:27:00.984158 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:27:01.135939 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:27:01.161205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:27:01.169646 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:27:01.179737 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:27:01.181504 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:27:01.192590 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:27:01.195741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:01.208451 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:27:01.208981 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:27:01.218696 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:27:01.221516 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 17 23:27:01.222832 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:27:01.223735 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:27:01.224781 systemd[1]: Startup finished in 763ms (kernel) + 5.206s (initrd) + 4.355s (userspace) = 10.325s.
Apr 17 23:27:01.665405 kubelet[1567]: E0417 23:27:01.665314 1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:27:01.668213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:27:01.668515 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:27:05.053960 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:27:05.060575 systemd[1]: Started sshd@0-142.132.185.111:22-50.85.169.122:53322.service - OpenSSH per-connection server daemon (50.85.169.122:53322).
Apr 17 23:27:05.191009 sshd[1583]: Accepted publickey for core from 50.85.169.122 port 53322 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:05.194227 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:05.204724 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:27:05.221619 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:27:05.225975 systemd-logind[1453]: New session 1 of user core.
Apr 17 23:27:05.239096 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:27:05.245746 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:27:05.263302 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:27:05.380586 systemd[1587]: Queued start job for default target default.target.
Apr 17 23:27:05.390222 systemd[1587]: Created slice app.slice - User Application Slice.
Apr 17 23:27:05.390281 systemd[1587]: Reached target paths.target - Paths.
Apr 17 23:27:05.390308 systemd[1587]: Reached target timers.target - Timers.
Apr 17 23:27:05.392610 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:27:05.407621 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:27:05.407742 systemd[1587]: Reached target sockets.target - Sockets.
Apr 17 23:27:05.407757 systemd[1587]: Reached target basic.target - Basic System.
Apr 17 23:27:05.407819 systemd[1587]: Reached target default.target - Main User Target.
Apr 17 23:27:05.407851 systemd[1587]: Startup finished in 137ms.
Apr 17 23:27:05.407947 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:27:05.416455 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:27:05.541494 systemd[1]: Started sshd@1-142.132.185.111:22-50.85.169.122:53328.service - OpenSSH per-connection server daemon (50.85.169.122:53328).
Apr 17 23:27:05.662312 sshd[1598]: Accepted publickey for core from 50.85.169.122 port 53328 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:05.664794 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:05.670202 systemd-logind[1453]: New session 2 of user core.
Apr 17 23:27:05.678596 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:27:05.778648 sshd[1598]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:05.784688 systemd[1]: sshd@1-142.132.185.111:22-50.85.169.122:53328.service: Deactivated successfully.
Apr 17 23:27:05.789335 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:27:05.790170 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:27:05.791536 systemd-logind[1453]: Removed session 2.
Apr 17 23:27:05.813344 systemd[1]: Started sshd@2-142.132.185.111:22-50.85.169.122:53336.service - OpenSSH per-connection server daemon (50.85.169.122:53336).
Apr 17 23:27:05.948977 sshd[1605]: Accepted publickey for core from 50.85.169.122 port 53336 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:05.950366 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:05.955417 systemd-logind[1453]: New session 3 of user core.
Apr 17 23:27:05.963388 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:27:06.061926 sshd[1605]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:06.066639 systemd[1]: sshd@2-142.132.185.111:22-50.85.169.122:53336.service: Deactivated successfully.
Apr 17 23:27:06.068702 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:27:06.070531 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:27:06.071822 systemd-logind[1453]: Removed session 3.
Apr 17 23:27:06.101610 systemd[1]: Started sshd@3-142.132.185.111:22-50.85.169.122:53352.service - OpenSSH per-connection server daemon (50.85.169.122:53352).
Apr 17 23:27:06.225288 sshd[1612]: Accepted publickey for core from 50.85.169.122 port 53352 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:06.227538 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:06.233119 systemd-logind[1453]: New session 4 of user core.
Apr 17 23:27:06.240302 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:27:06.342350 sshd[1612]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:06.346999 systemd[1]: sshd@3-142.132.185.111:22-50.85.169.122:53352.service: Deactivated successfully.
Apr 17 23:27:06.348921 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:27:06.350205 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:27:06.351411 systemd-logind[1453]: Removed session 4.
Apr 17 23:27:06.371368 systemd[1]: Started sshd@4-142.132.185.111:22-50.85.169.122:53354.service - OpenSSH per-connection server daemon (50.85.169.122:53354).
Apr 17 23:27:06.487167 sshd[1619]: Accepted publickey for core from 50.85.169.122 port 53354 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:06.488662 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:06.494099 systemd-logind[1453]: New session 5 of user core.
Apr 17 23:27:06.503424 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:27:06.595090 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:27:06.595395 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:06.614281 sudo[1622]: pam_unix(sudo:session): session closed for user root
Apr 17 23:27:06.630415 sshd[1619]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:06.636394 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:27:06.636553 systemd[1]: sshd@4-142.132.185.111:22-50.85.169.122:53354.service: Deactivated successfully.
Apr 17 23:27:06.638880 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:27:06.639980 systemd-logind[1453]: Removed session 5.
Apr 17 23:27:06.653720 systemd[1]: Started sshd@5-142.132.185.111:22-50.85.169.122:53356.service - OpenSSH per-connection server daemon (50.85.169.122:53356).
Apr 17 23:27:06.773103 sshd[1627]: Accepted publickey for core from 50.85.169.122 port 53356 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:06.776049 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:06.780712 systemd-logind[1453]: New session 6 of user core.
Apr 17 23:27:06.787403 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:27:06.872114 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:27:06.872422 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:06.877030 sudo[1631]: pam_unix(sudo:session): session closed for user root
Apr 17 23:27:06.883526 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:27:06.883935 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:06.899555 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:27:06.903778 auditctl[1634]: No rules
Apr 17 23:27:06.904141 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:27:06.904339 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:27:06.913041 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:27:06.940462 augenrules[1652]: No rules
Apr 17 23:27:06.944085 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:27:06.945430 sudo[1630]: pam_unix(sudo:session): session closed for user root
Apr 17 23:27:06.960845 sshd[1627]: pam_unix(sshd:session): session closed for user core
Apr 17 23:27:06.965082 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:27:06.965412 systemd[1]: sshd@5-142.132.185.111:22-50.85.169.122:53356.service: Deactivated successfully.
Apr 17 23:27:06.968188 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:27:06.970858 systemd-logind[1453]: Removed session 6.
Apr 17 23:27:07.000608 systemd[1]: Started sshd@6-142.132.185.111:22-50.85.169.122:53372.service - OpenSSH per-connection server daemon (50.85.169.122:53372).
Apr 17 23:27:07.127568 sshd[1660]: Accepted publickey for core from 50.85.169.122 port 53372 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:27:07.129940 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:27:07.135097 systemd-logind[1453]: New session 7 of user core.
Apr 17 23:27:07.145369 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:27:07.231111 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:27:07.231398 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:27:07.530436 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:27:07.532353 (dockerd)[1678]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:27:07.786137 dockerd[1678]: time="2026-04-17T23:27:07.784230000Z" level=info msg="Starting up"
Apr 17 23:27:07.871765 systemd[1]: var-lib-docker-metacopy\x2dcheck2488233172-merged.mount: Deactivated successfully.
Apr 17 23:27:07.880859 dockerd[1678]: time="2026-04-17T23:27:07.880735640Z" level=info msg="Loading containers: start."
Apr 17 23:27:07.990098 kernel: Initializing XFRM netlink socket
Apr 17 23:27:08.012732 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:27:08.022497 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:27:08.070833 systemd-networkd[1366]: docker0: Link UP
Apr 17 23:27:08.071526 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Apr 17 23:27:08.092327 dockerd[1678]: time="2026-04-17T23:27:08.092238160Z" level=info msg="Loading containers: done."
Apr 17 23:27:08.110428 dockerd[1678]: time="2026-04-17T23:27:08.110354320Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:27:08.110649 dockerd[1678]: time="2026-04-17T23:27:08.110503840Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:27:08.110697 dockerd[1678]: time="2026-04-17T23:27:08.110678920Z" level=info msg="Daemon has completed initialization"
Apr 17 23:27:08.151081 dockerd[1678]: time="2026-04-17T23:27:08.150816800Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:27:08.151336 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:27:08.609678 containerd[1473]: time="2026-04-17T23:27:08.609490320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 17 23:27:08.860522 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2387115321-merged.mount: Deactivated successfully.
Apr 17 23:27:09.164398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999860942.mount: Deactivated successfully.
Apr 17 23:27:10.061092 containerd[1473]: time="2026-04-17T23:27:10.060865800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:10.063329 containerd[1473]: time="2026-04-17T23:27:10.062976800Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608883"
Apr 17 23:27:10.063329 containerd[1473]: time="2026-04-17T23:27:10.063270560Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:10.066784 containerd[1473]: time="2026-04-17T23:27:10.066690600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:10.069105 containerd[1473]: time="2026-04-17T23:27:10.068214960Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 1.45867524s"
Apr 17 23:27:10.069105 containerd[1473]: time="2026-04-17T23:27:10.068255360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\""
Apr 17 23:27:10.069633 containerd[1473]: time="2026-04-17T23:27:10.069399480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 17 23:27:11.123773 containerd[1473]: time="2026-04-17T23:27:11.123601840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:11.125805 containerd[1473]: time="2026-04-17T23:27:11.125766360Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073314"
Apr 17 23:27:11.127049 containerd[1473]: time="2026-04-17T23:27:11.126614440Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:11.132997 containerd[1473]: time="2026-04-17T23:27:11.132937000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:11.135726 containerd[1473]: time="2026-04-17T23:27:11.135679800Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 1.06624692s"
Apr 17 23:27:11.135848 containerd[1473]: time="2026-04-17T23:27:11.135830840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\""
Apr 17 23:27:11.136512 containerd[1473]: time="2026-04-17T23:27:11.136488200Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 17 23:27:11.918609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:27:11.924286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:11.927575 containerd[1473]: time="2026-04-17T23:27:11.926384400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:11.930124 containerd[1473]: time="2026-04-17T23:27:11.929391760Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800856"
Apr 17 23:27:11.932046 containerd[1473]: time="2026-04-17T23:27:11.931990560Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:11.937518 containerd[1473]: time="2026-04-17T23:27:11.937165440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:11.939522 containerd[1473]: time="2026-04-17T23:27:11.939481840Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 802.95692ms"
Apr 17 23:27:11.939901 containerd[1473]: time="2026-04-17T23:27:11.939527520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\""
Apr 17 23:27:11.940157 containerd[1473]: time="2026-04-17T23:27:11.940094920Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 17 23:27:12.072466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:12.081609 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:27:12.131951 kubelet[1893]: E0417 23:27:12.131861 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:27:12.136128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:27:12.136424 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:27:12.759005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818674253.mount: Deactivated successfully.
Apr 17 23:27:12.978188 containerd[1473]: time="2026-04-17T23:27:12.978106200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:12.980150 containerd[1473]: time="2026-04-17T23:27:12.980092800Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340610"
Apr 17 23:27:12.981276 containerd[1473]: time="2026-04-17T23:27:12.981220920Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:12.983388 containerd[1473]: time="2026-04-17T23:27:12.983326520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:12.984554 containerd[1473]: time="2026-04-17T23:27:12.983911400Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.0437768s"
Apr 17 23:27:12.984554 containerd[1473]: time="2026-04-17T23:27:12.983943360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\""
Apr 17 23:27:12.984554 containerd[1473]: time="2026-04-17T23:27:12.984339480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 17 23:27:13.467296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050708697.mount: Deactivated successfully.
Apr 17 23:27:14.434441 containerd[1473]: time="2026-04-17T23:27:14.434392760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:14.436990 containerd[1473]: time="2026-04-17T23:27:14.436958120Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309"
Apr 17 23:27:14.438669 containerd[1473]: time="2026-04-17T23:27:14.438620120Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:14.442120 containerd[1473]: time="2026-04-17T23:27:14.442092360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:14.445406 containerd[1473]: time="2026-04-17T23:27:14.445348960Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.4609798s"
Apr 17 23:27:14.445406 containerd[1473]: time="2026-04-17T23:27:14.445401440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 17 23:27:14.445965 containerd[1473]: time="2026-04-17T23:27:14.445925880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:27:14.901595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863276133.mount: Deactivated successfully.
Apr 17 23:27:14.908130 containerd[1473]: time="2026-04-17T23:27:14.908047120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:14.910100 containerd[1473]: time="2026-04-17T23:27:14.910040320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Apr 17 23:27:14.911812 containerd[1473]: time="2026-04-17T23:27:14.911316040Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:14.914192 containerd[1473]: time="2026-04-17T23:27:14.914144120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:14.915174 containerd[1473]: time="2026-04-17T23:27:14.915141000Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 469.17744ms"
Apr 17 23:27:14.915287 containerd[1473]: time="2026-04-17T23:27:14.915267680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 17 23:27:14.917000 containerd[1473]: time="2026-04-17T23:27:14.916961520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 17 23:27:15.368488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991367774.mount: Deactivated successfully.
Apr 17 23:27:16.032826 containerd[1473]: time="2026-04-17T23:27:16.032743640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:16.035035 containerd[1473]: time="2026-04-17T23:27:16.034459600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752394"
Apr 17 23:27:16.036294 containerd[1473]: time="2026-04-17T23:27:16.036239400Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:16.041348 containerd[1473]: time="2026-04-17T23:27:16.041305200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:16.042643 containerd[1473]: time="2026-04-17T23:27:16.042599040Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.12548076s"
Apr 17 23:27:16.042764 containerd[1473]: time="2026-04-17T23:27:16.042747560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 17 23:27:18.852483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:18.858463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:18.891657 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit session-7.scope)...
Apr 17 23:27:18.891814 systemd[1]: Reloading...
Apr 17 23:27:19.012151 zram_generator::config[2096]: No configuration found.
Apr 17 23:27:19.124211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:27:19.194983 systemd[1]: Reloading finished in 302 ms.
Apr 17 23:27:19.260518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:27:19.260708 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:27:19.261397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:19.268470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:19.381225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:19.391543 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:27:19.440012 kubelet[2145]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:27:19.955977 kubelet[2145]: I0417 23:27:19.955883 2145 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 17 23:27:19.955977 kubelet[2145]: I0417 23:27:19.955950 2145 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:27:19.955977 kubelet[2145]: I0417 23:27:19.955984 2145 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:27:19.955977 kubelet[2145]: I0417 23:27:19.955990 2145 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:27:19.958091 kubelet[2145]: I0417 23:27:19.957143 2145 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 17 23:27:19.968262 kubelet[2145]: I0417 23:27:19.968221 2145 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:27:19.968491 kubelet[2145]: E0417 23:27:19.968267 2145 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://142.132.185.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 142.132.185.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:27:19.974002 kubelet[2145]: E0417 23:27:19.973940 2145 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:27:19.974244 kubelet[2145]: I0417 23:27:19.974225 2145 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:27:19.976916 kubelet[2145]: I0417 23:27:19.976889 2145 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:27:19.978009 kubelet[2145]: I0417 23:27:19.977968 2145 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:27:19.978298 kubelet[2145]: I0417 23:27:19.978129 2145 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-ddb46eeabf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:27:19.978432 kubelet[2145]: I0417 23:27:19.978417 2145 topology_manager.go:143] "Creating topology manager with none policy"
Apr 17 23:27:19.978495 kubelet[2145]: I0417 23:27:19.978487 2145 container_manager_linux.go:308] "Creating device plugin manager"
Apr 17 23:27:19.978693 kubelet[2145]: I0417 23:27:19.978675 2145 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:27:19.981929 kubelet[2145]: I0417 23:27:19.981902 2145 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 17 23:27:19.982400 kubelet[2145]: I0417 23:27:19.982382 2145 kubelet.go:482] "Attempting to sync node with API server"
Apr 17 23:27:19.982502 kubelet[2145]: I0417 23:27:19.982490 2145 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:27:19.982605 kubelet[2145]: I0417 23:27:19.982592 2145 kubelet.go:394] "Adding apiserver pod source"
Apr 17 23:27:19.982684 kubelet[2145]: I0417 23:27:19.982674 2145 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:27:19.986484 kubelet[2145]: I0417 23:27:19.986464 2145 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:27:19.987693 kubelet[2145]: I0417 23:27:19.987662 2145 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:27:19.987801 kubelet[2145]: I0417 23:27:19.987790 2145 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:27:19.987897 kubelet[2145]: W0417 23:27:19.987885 2145 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:27:19.990779 kubelet[2145]: I0417 23:27:19.990760 2145 server.go:1257] "Started kubelet"
Apr 17 23:27:19.999901 kubelet[2145]: I0417 23:27:19.999000 2145 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 17 23:27:20.012193 kubelet[2145]: E0417 23:27:19.997368 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.185.111:6443/api/v1/namespaces/default/events\": dial tcp 142.132.185.111:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-ddb46eeabf.18a748a25056f770 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-ddb46eeabf,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-ddb46eeabf,},FirstTimestamp:2026-04-17 23:27:19.99072856 +0000 UTC m=+0.593625321,LastTimestamp:2026-04-17 23:27:19.99072856 +0000 UTC m=+0.593625321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-ddb46eeabf,}"
Apr 17 23:27:20.012589 kubelet[2145]: I0417 23:27:20.012566 2145 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:27:20.012703 kubelet[2145]: I0417 23:27:20.012640 2145 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:27:20.013943 kubelet[2145]: I0417 23:27:20.013916 2145 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:27:20.014922 kubelet[2145]: I0417 23:27:20.014899 2145 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 17 23:27:20.015144 kubelet[2145]: E0417 23:27:20.015125 2145 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found"
Apr 17 23:27:20.015262 kubelet[2145]: I0417 23:27:20.015204 2145 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:27:20.015295 kubelet[2145]: I0417 23:27:20.015282 2145 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:27:20.015486 kubelet[2145]: I0417 23:27:20.015466 2145 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:27:20.017447 kubelet[2145]: I0417 23:27:20.017422 2145 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:27:20.018028 kubelet[2145]: I0417 23:27:20.017683 2145 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:27:20.020944 kubelet[2145]: E0417 23:27:20.020909 2145 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.185.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ddb46eeabf?timeout=10s\": dial tcp 142.132.185.111:6443: connect: connection refused" interval="200ms"
Apr 17 23:27:20.021474 kubelet[2145]: I0417 23:27:20.021446 2145 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:27:20.023206 kubelet[2145]: E0417 23:27:20.022841 2145 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:27:20.023410 kubelet[2145]: I0417 23:27:20.023394 2145 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:27:20.023480 kubelet[2145]: I0417 23:27:20.023471 2145 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:27:20.038041 kubelet[2145]: I0417 23:27:20.037936 2145 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:27:20.040226 kubelet[2145]: I0417 23:27:20.040023 2145 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:27:20.040226 kubelet[2145]: I0417 23:27:20.040052 2145 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 17 23:27:20.040226 kubelet[2145]: I0417 23:27:20.040087 2145 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 17 23:27:20.040226 kubelet[2145]: E0417 23:27:20.040129 2145 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:27:20.053990 kubelet[2145]: I0417 23:27:20.053961 2145 cpu_manager.go:225] "Starting" policy="none"
Apr 17 23:27:20.053990 kubelet[2145]: I0417 23:27:20.053979 2145 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 17 23:27:20.053990 kubelet[2145]: I0417 23:27:20.053998 2145 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 17 23:27:20.057526 kubelet[2145]: I0417 23:27:20.057504 2145 policy_none.go:50] "Start"
Apr 17 23:27:20.057526 kubelet[2145]: I0417 23:27:20.057527 2145 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:27:20.057629 kubelet[2145]: I0417 23:27:20.057549 2145 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:27:20.058728 kubelet[2145]: I0417 23:27:20.058686 2145 policy_none.go:44] "Start"
Apr 17 23:27:20.064490 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:27:20.080834 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:27:20.086883 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 17 23:27:20.095672 kubelet[2145]: E0417 23:27:20.095626 2145 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:27:20.097689 kubelet[2145]: I0417 23:27:20.096963 2145 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 17 23:27:20.097689 kubelet[2145]: I0417 23:27:20.096993 2145 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:27:20.097689 kubelet[2145]: I0417 23:27:20.097485 2145 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 17 23:27:20.099555 kubelet[2145]: E0417 23:27:20.099419 2145 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:27:20.099555 kubelet[2145]: E0417 23:27:20.099472 2145 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-ddb46eeabf\" not found"
Apr 17 23:27:20.157219 systemd[1]: Created slice kubepods-burstable-podcc2032bab80b9c62f3c272e43ab56857.slice - libcontainer container kubepods-burstable-podcc2032bab80b9c62f3c272e43ab56857.slice.
Apr 17 23:27:20.175504 kubelet[2145]: E0417 23:27:20.175413 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.181301 systemd[1]: Created slice kubepods-burstable-pod6ef49a8c996a6a0999a0e4ebc2368b0b.slice - libcontainer container kubepods-burstable-pod6ef49a8c996a6a0999a0e4ebc2368b0b.slice.
Apr 17 23:27:20.184579 kubelet[2145]: E0417 23:27:20.184388 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.186305 systemd[1]: Created slice kubepods-burstable-pod406d698ed8d849538bd437adc12e78bf.slice - libcontainer container kubepods-burstable-pod406d698ed8d849538bd437adc12e78bf.slice.
Apr 17 23:27:20.188314 kubelet[2145]: E0417 23:27:20.188291 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.201169 kubelet[2145]: I0417 23:27:20.201134 2145 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.202153 kubelet[2145]: E0417 23:27:20.202108 2145 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://142.132.185.111:6443/api/v1/nodes\": dial tcp 142.132.185.111:6443: connect: connection refused" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.222400 kubelet[2145]: E0417 23:27:20.222223 2145 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.185.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ddb46eeabf?timeout=10s\": dial tcp 142.132.185.111:6443: connect: connection refused" interval="400ms"
Apr 17 23:27:20.319292 kubelet[2145]: I0417 23:27:20.319197 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.319936 kubelet[2145]: I0417 23:27:20.319343 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc2032bab80b9c62f3c272e43ab56857-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" (UID: \"cc2032bab80b9c62f3c272e43ab56857\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.319936 kubelet[2145]: I0417 23:27:20.319406 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc2032bab80b9c62f3c272e43ab56857-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" (UID: \"cc2032bab80b9c62f3c272e43ab56857\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.319936 kubelet[2145]: I0417 23:27:20.319483 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc2032bab80b9c62f3c272e43ab56857-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" (UID: \"cc2032bab80b9c62f3c272e43ab56857\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.319936 kubelet[2145]: I0417 23:27:20.319533 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.319936 kubelet[2145]: I0417 23:27:20.319606 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.320295 kubelet[2145]: I0417 23:27:20.319658 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/406d698ed8d849538bd437adc12e78bf-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-ddb46eeabf\" (UID: \"406d698ed8d849538bd437adc12e78bf\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.320295 kubelet[2145]: I0417 23:27:20.319694 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.320295 kubelet[2145]: I0417 23:27:20.319738 2145 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.405687 kubelet[2145]: I0417 23:27:20.405639 2145 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.406242 kubelet[2145]: E0417 23:27:20.406168 2145 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://142.132.185.111:6443/api/v1/nodes\": dial tcp 142.132.185.111:6443: connect: connection refused" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.481453 containerd[1473]: time="2026-04-17T23:27:20.480767040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-ddb46eeabf,Uid:cc2032bab80b9c62f3c272e43ab56857,Namespace:kube-system,Attempt:0,}"
Apr 17 23:27:20.488133 containerd[1473]: time="2026-04-17T23:27:20.488084400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-ddb46eeabf,Uid:6ef49a8c996a6a0999a0e4ebc2368b0b,Namespace:kube-system,Attempt:0,}"
Apr 17 23:27:20.490985 containerd[1473]: time="2026-04-17T23:27:20.490947800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-ddb46eeabf,Uid:406d698ed8d849538bd437adc12e78bf,Namespace:kube-system,Attempt:0,}"
Apr 17 23:27:20.622944 kubelet[2145]: E0417 23:27:20.622862 2145 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.185.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ddb46eeabf?timeout=10s\": dial tcp 142.132.185.111:6443: connect: connection refused" interval="800ms"
Apr 17 23:27:20.808914 kubelet[2145]: I0417 23:27:20.808892 2145 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.809495 kubelet[2145]: E0417 23:27:20.809463 2145 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://142.132.185.111:6443/api/v1/nodes\": dial tcp 142.132.185.111:6443: connect: connection refused" node="ci-4081-3-6-n-ddb46eeabf"
Apr 17 23:27:20.975592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444104418.mount: Deactivated successfully.
Apr 17 23:27:20.982421 containerd[1473]: time="2026-04-17T23:27:20.981521840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:27:20.983147 containerd[1473]: time="2026-04-17T23:27:20.983119160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 17 23:27:20.986893 containerd[1473]: time="2026-04-17T23:27:20.986842040Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:27:20.988375 containerd[1473]: time="2026-04-17T23:27:20.988279160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:27:20.991079 containerd[1473]: time="2026-04-17T23:27:20.990370400Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:27:20.991154 containerd[1473]: time="2026-04-17T23:27:20.991113720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:27:20.991328 containerd[1473]: time="2026-04-17T23:27:20.991304440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:27:20.992249 containerd[1473]: time="2026-04-17T23:27:20.992218800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.3302ms" Apr 17 23:27:20.994717 containerd[1473]: time="2026-04-17T23:27:20.994682240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:27:20.997077 containerd[1473]: time="2026-04-17T23:27:20.996488640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.31796ms" Apr 17 23:27:21.006780 containerd[1473]: time="2026-04-17T23:27:21.003748440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.50804ms" Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118259880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118321320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118337600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118411960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.117976560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118040400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118066840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:21.118609 containerd[1473]: time="2026-04-17T23:27:21.118156240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:21.124306 containerd[1473]: time="2026-04-17T23:27:21.123791680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:21.124489 containerd[1473]: time="2026-04-17T23:27:21.124433600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:21.124689 containerd[1473]: time="2026-04-17T23:27:21.124552640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:21.125204 containerd[1473]: time="2026-04-17T23:27:21.125092200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:21.148534 systemd[1]: Started cri-containerd-b4442f683ccea537c6474e4585270d4e737230f3f1761f9c984b9842a67bac53.scope - libcontainer container b4442f683ccea537c6474e4585270d4e737230f3f1761f9c984b9842a67bac53. Apr 17 23:27:21.153261 systemd[1]: Started cri-containerd-0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10.scope - libcontainer container 0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10. Apr 17 23:27:21.155132 systemd[1]: Started cri-containerd-d33d2c990542cdcf3ab7bd286100938a0f988ba628818ecae160435325fa1e1b.scope - libcontainer container d33d2c990542cdcf3ab7bd286100938a0f988ba628818ecae160435325fa1e1b. Apr 17 23:27:21.207678 containerd[1473]: time="2026-04-17T23:27:21.207570200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-ddb46eeabf,Uid:cc2032bab80b9c62f3c272e43ab56857,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4442f683ccea537c6474e4585270d4e737230f3f1761f9c984b9842a67bac53\"" Apr 17 23:27:21.216568 containerd[1473]: time="2026-04-17T23:27:21.216406440Z" level=info msg="CreateContainer within sandbox \"b4442f683ccea537c6474e4585270d4e737230f3f1761f9c984b9842a67bac53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:27:21.222464 containerd[1473]: time="2026-04-17T23:27:21.222409760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-ddb46eeabf,Uid:406d698ed8d849538bd437adc12e78bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d33d2c990542cdcf3ab7bd286100938a0f988ba628818ecae160435325fa1e1b\"" Apr 17 23:27:21.224960 kubelet[2145]: E0417 23:27:21.224351 2145 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.185.111:6443/api/v1/namespaces/default/events\": dial tcp 142.132.185.111:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-ddb46eeabf.18a748a25056f770 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-ddb46eeabf,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-ddb46eeabf,},FirstTimestamp:2026-04-17 23:27:19.99072856 +0000 UTC m=+0.593625321,LastTimestamp:2026-04-17 23:27:19.99072856 +0000 UTC m=+0.593625321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-ddb46eeabf,}" Apr 17 23:27:21.227208 containerd[1473]: time="2026-04-17T23:27:21.226848920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-ddb46eeabf,Uid:6ef49a8c996a6a0999a0e4ebc2368b0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10\"" Apr 17 23:27:21.232014 containerd[1473]: time="2026-04-17T23:27:21.231978160Z" level=info msg="CreateContainer within sandbox \"d33d2c990542cdcf3ab7bd286100938a0f988ba628818ecae160435325fa1e1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:27:21.234906 containerd[1473]: time="2026-04-17T23:27:21.234866960Z" level=info msg="CreateContainer within sandbox \"0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:27:21.243634 containerd[1473]: time="2026-04-17T23:27:21.243585040Z" level=info msg="CreateContainer within sandbox \"b4442f683ccea537c6474e4585270d4e737230f3f1761f9c984b9842a67bac53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"804aa119f6c0ff60b4833e7f96ec2c983922236407a14430ae68578cf3cf5067\"" Apr 17 23:27:21.244881 containerd[1473]: time="2026-04-17T23:27:21.244840360Z" level=info msg="StartContainer for \"804aa119f6c0ff60b4833e7f96ec2c983922236407a14430ae68578cf3cf5067\"" 
Apr 17 23:27:21.256351 containerd[1473]: time="2026-04-17T23:27:21.256230240Z" level=info msg="CreateContainer within sandbox \"d33d2c990542cdcf3ab7bd286100938a0f988ba628818ecae160435325fa1e1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d\"" Apr 17 23:27:21.258735 containerd[1473]: time="2026-04-17T23:27:21.257164280Z" level=info msg="StartContainer for \"e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d\"" Apr 17 23:27:21.260573 containerd[1473]: time="2026-04-17T23:27:21.260499760Z" level=info msg="CreateContainer within sandbox \"0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00\"" Apr 17 23:27:21.261229 containerd[1473]: time="2026-04-17T23:27:21.261136520Z" level=info msg="StartContainer for \"d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00\"" Apr 17 23:27:21.281258 systemd[1]: Started cri-containerd-804aa119f6c0ff60b4833e7f96ec2c983922236407a14430ae68578cf3cf5067.scope - libcontainer container 804aa119f6c0ff60b4833e7f96ec2c983922236407a14430ae68578cf3cf5067. Apr 17 23:27:21.313395 systemd[1]: Started cri-containerd-d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00.scope - libcontainer container d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00. Apr 17 23:27:21.323722 systemd[1]: Started cri-containerd-e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d.scope - libcontainer container e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d. 
Apr 17 23:27:21.330151 containerd[1473]: time="2026-04-17T23:27:21.330042600Z" level=info msg="StartContainer for \"804aa119f6c0ff60b4833e7f96ec2c983922236407a14430ae68578cf3cf5067\" returns successfully" Apr 17 23:27:21.382161 containerd[1473]: time="2026-04-17T23:27:21.380588680Z" level=info msg="StartContainer for \"d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00\" returns successfully" Apr 17 23:27:21.398348 containerd[1473]: time="2026-04-17T23:27:21.398200920Z" level=info msg="StartContainer for \"e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d\" returns successfully" Apr 17 23:27:21.424156 kubelet[2145]: E0417 23:27:21.424088 2145 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.185.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-ddb46eeabf?timeout=10s\": dial tcp 142.132.185.111:6443: connect: connection refused" interval="1.6s" Apr 17 23:27:21.613110 kubelet[2145]: I0417 23:27:21.612573 2145 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:22.059312 kubelet[2145]: E0417 23:27:22.059265 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:22.064795 kubelet[2145]: E0417 23:27:22.064142 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:22.067702 kubelet[2145]: E0417 23:27:22.067475 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.067555 kubelet[2145]: E0417 23:27:23.067467 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.068673 kubelet[2145]: E0417 23:27:23.068647 2145 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.073204 kubelet[2145]: E0417 23:27:23.073146 2145 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-ddb46eeabf\" not found" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.134047 kubelet[2145]: I0417 23:27:23.133309 2145 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.217891 kubelet[2145]: I0417 23:27:23.217838 2145 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.227233 kubelet[2145]: E0417 23:27:23.226735 2145 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.227233 kubelet[2145]: I0417 23:27:23.226768 2145 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.229888 kubelet[2145]: E0417 23:27:23.229756 2145 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.229888 kubelet[2145]: I0417 23:27:23.229810 2145 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.232696 kubelet[2145]: E0417 23:27:23.232651 2145 kubelet.go:3342] "Failed creating a mirror 
pod" err="pods \"kube-scheduler-ci-4081-3-6-n-ddb46eeabf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:23.988683 kubelet[2145]: I0417 23:27:23.987842 2145 apiserver.go:52] "Watching apiserver" Apr 17 23:27:24.018229 kubelet[2145]: I0417 23:27:24.018167 2145 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:27:24.068874 kubelet[2145]: I0417 23:27:24.068823 2145 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:25.346222 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)... Apr 17 23:27:25.346240 systemd[1]: Reloading... Apr 17 23:27:25.443165 zram_generator::config[2475]: No configuration found. Apr 17 23:27:25.553928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:27:25.638946 systemd[1]: Reloading finished in 292 ms. Apr 17 23:27:25.680716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:27:25.697979 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:27:25.698439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:27:25.703700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:27:25.827633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:27:25.839668 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:27:25.896978 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:27:25.907770 kubelet[2519]: I0417 23:27:25.907687 2519 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:27:25.907770 kubelet[2519]: I0417 23:27:25.907751 2519 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:27:25.907770 kubelet[2519]: I0417 23:27:25.907778 2519 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:27:25.907770 kubelet[2519]: I0417 23:27:25.907785 2519 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:27:25.908123 kubelet[2519]: I0417 23:27:25.908082 2519 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:27:25.909589 kubelet[2519]: I0417 23:27:25.909557 2519 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:27:25.912090 kubelet[2519]: I0417 23:27:25.911824 2519 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:27:25.915500 kubelet[2519]: E0417 23:27:25.915358 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:27:25.915500 kubelet[2519]: I0417 23:27:25.915411 2519 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:27:25.921050 kubelet[2519]: I0417 23:27:25.921026 2519 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:27:25.922204 kubelet[2519]: I0417 23:27:25.921383 2519 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:27:25.922204 kubelet[2519]: I0417 23:27:25.921419 2519 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-ddb46eeabf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:27:25.922204 kubelet[2519]: I0417 23:27:25.921625 2519 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 
23:27:25.922204 kubelet[2519]: I0417 23:27:25.921634 2519 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:27:25.922427 kubelet[2519]: I0417 23:27:25.921656 2519 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:27:25.922427 kubelet[2519]: I0417 23:27:25.921861 2519 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:27:25.922427 kubelet[2519]: I0417 23:27:25.922035 2519 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:27:25.922427 kubelet[2519]: I0417 23:27:25.922051 2519 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:27:25.922427 kubelet[2519]: I0417 23:27:25.922090 2519 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:27:25.922427 kubelet[2519]: I0417 23:27:25.922100 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:27:25.930121 kubelet[2519]: I0417 23:27:25.928143 2519 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:27:25.930121 kubelet[2519]: I0417 23:27:25.929098 2519 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:27:25.930121 kubelet[2519]: I0417 23:27:25.929128 2519 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:27:25.936439 kubelet[2519]: I0417 23:27:25.936319 2519 server.go:1257] "Started kubelet" Apr 17 23:27:25.944319 kubelet[2519]: I0417 23:27:25.944255 2519 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:27:25.956187 kubelet[2519]: I0417 23:27:25.956123 2519 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:27:25.959874 kubelet[2519]: I0417 23:27:25.958125 2519 server.go:317] "Adding debug handlers 
to kubelet server" Apr 17 23:27:25.963852 kubelet[2519]: I0417 23:27:25.963789 2519 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:27:25.964092 kubelet[2519]: I0417 23:27:25.964011 2519 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:27:25.964318 kubelet[2519]: I0417 23:27:25.964302 2519 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:27:25.964715 kubelet[2519]: I0417 23:27:25.964691 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:27:25.967587 kubelet[2519]: I0417 23:27:25.967567 2519 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:27:25.968335 kubelet[2519]: E0417 23:27:25.968312 2519 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-ddb46eeabf\" not found" Apr 17 23:27:25.973021 kubelet[2519]: I0417 23:27:25.972999 2519 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:27:25.973370 kubelet[2519]: I0417 23:27:25.973239 2519 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:27:25.976260 kubelet[2519]: I0417 23:27:25.976153 2519 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:27:25.978371 kubelet[2519]: I0417 23:27:25.977830 2519 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:27:25.978371 kubelet[2519]: I0417 23:27:25.977862 2519 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:27:25.978371 kubelet[2519]: I0417 23:27:25.977880 2519 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:27:25.978371 kubelet[2519]: E0417 23:27:25.977917 2519 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:27:25.989178 kubelet[2519]: I0417 23:27:25.988973 2519 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:27:25.989178 kubelet[2519]: I0417 23:27:25.989155 2519 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:27:25.991121 kubelet[2519]: I0417 23:27:25.990821 2519 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:27:25.993938 kubelet[2519]: E0417 23:27:25.993898 2519 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:27:26.047471 kubelet[2519]: I0417 23:27:26.047421 2519 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:27:26.048214 kubelet[2519]: I0417 23:27:26.047441 2519 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:27:26.048214 kubelet[2519]: I0417 23:27:26.047538 2519 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:27:26.048753 kubelet[2519]: I0417 23:27:26.048706 2519 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:27:26.048790 kubelet[2519]: I0417 23:27:26.048754 2519 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:27:26.048790 kubelet[2519]: I0417 23:27:26.048783 2519 policy_none.go:50] "Start" Apr 17 23:27:26.048843 kubelet[2519]: I0417 23:27:26.048793 2519 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:27:26.048843 kubelet[2519]: I0417 23:27:26.048815 2519 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:27:26.049003 kubelet[2519]: I0417 23:27:26.048988 2519 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:27:26.049029 kubelet[2519]: I0417 23:27:26.049010 2519 policy_none.go:44] "Start" Apr 17 23:27:26.054949 kubelet[2519]: E0417 23:27:26.054923 2519 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:27:26.055252 kubelet[2519]: I0417 23:27:26.055116 2519 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:27:26.055252 kubelet[2519]: I0417 23:27:26.055132 2519 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:27:26.055400 
kubelet[2519]: I0417 23:27:26.055374 2519 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:27:26.058247 kubelet[2519]: E0417 23:27:26.058176 2519 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:27:26.079635 kubelet[2519]: I0417 23:27:26.079207 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.080032 kubelet[2519]: I0417 23:27:26.080005 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.080591 kubelet[2519]: I0417 23:27:26.080556 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.090471 kubelet[2519]: E0417 23:27:26.090377 2519 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-ddb46eeabf\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.159881 kubelet[2519]: I0417 23:27:26.159241 2519 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.171626 kubelet[2519]: I0417 23:27:26.171363 2519 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.171626 kubelet[2519]: I0417 23:27:26.171540 2519 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174077 kubelet[2519]: I0417 23:27:26.173631 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/406d698ed8d849538bd437adc12e78bf-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-ddb46eeabf\" (UID: \"406d698ed8d849538bd437adc12e78bf\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 
23:27:26.174077 kubelet[2519]: I0417 23:27:26.173672 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc2032bab80b9c62f3c272e43ab56857-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" (UID: \"cc2032bab80b9c62f3c272e43ab56857\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174077 kubelet[2519]: I0417 23:27:26.173692 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc2032bab80b9c62f3c272e43ab56857-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" (UID: \"cc2032bab80b9c62f3c272e43ab56857\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174077 kubelet[2519]: I0417 23:27:26.173707 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174077 kubelet[2519]: I0417 23:27:26.173768 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174278 kubelet[2519]: I0417 23:27:26.173784 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174278 kubelet[2519]: I0417 23:27:26.173800 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174278 kubelet[2519]: I0417 23:27:26.173818 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc2032bab80b9c62f3c272e43ab56857-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-ddb46eeabf\" (UID: \"cc2032bab80b9c62f3c272e43ab56857\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.174278 kubelet[2519]: I0417 23:27:26.173851 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ef49a8c996a6a0999a0e4ebc2368b0b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-ddb46eeabf\" (UID: \"6ef49a8c996a6a0999a0e4ebc2368b0b\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:26.924482 kubelet[2519]: I0417 23:27:26.924445 2519 apiserver.go:52] "Watching apiserver" Apr 17 23:27:26.973368 kubelet[2519]: I0417 23:27:26.973322 2519 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:27:27.020260 kubelet[2519]: I0417 23:27:27.020167 2519 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:27.031289 kubelet[2519]: E0417 23:27:27.031031 2519 kubelet.go:3342] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-ddb46eeabf\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" Apr 17 23:27:27.031289 kubelet[2519]: I0417 23:27:27.031070 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-ddb46eeabf" podStartSLOduration=1.03103248 podStartE2EDuration="1.03103248s" podCreationTimestamp="2026-04-17 23:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:27:27.01666444 +0000 UTC m=+1.172486921" watchObservedRunningTime="2026-04-17 23:27:27.03103248 +0000 UTC m=+1.186854961" Apr 17 23:27:27.047078 kubelet[2519]: I0417 23:27:27.046269 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-ddb46eeabf" podStartSLOduration=3.04625312 podStartE2EDuration="3.04625312s" podCreationTimestamp="2026-04-17 23:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:27:27.03219576 +0000 UTC m=+1.188018241" watchObservedRunningTime="2026-04-17 23:27:27.04625312 +0000 UTC m=+1.202075601" Apr 17 23:27:27.067329 kubelet[2519]: I0417 23:27:27.067264 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-ddb46eeabf" podStartSLOduration=1.0672498 podStartE2EDuration="1.0672498s" podCreationTimestamp="2026-04-17 23:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:27:27.0505454 +0000 UTC m=+1.206367881" watchObservedRunningTime="2026-04-17 23:27:27.0672498 +0000 UTC m=+1.223072281" Apr 17 23:27:31.705854 kubelet[2519]: I0417 23:27:31.705798 2519 kuberuntime_manager.go:2062] "Updating runtime config through cri 
with podcidr" CIDR="192.168.0.0/24" Apr 17 23:27:31.707496 containerd[1473]: time="2026-04-17T23:27:31.707312600Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:27:31.707837 kubelet[2519]: I0417 23:27:31.707624 2519 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:27:32.699479 systemd[1]: Created slice kubepods-besteffort-pod405f57d1_7e8a_4c4d_ae91_a8a9c5c2ecae.slice - libcontainer container kubepods-besteffort-pod405f57d1_7e8a_4c4d_ae91_a8a9c5c2ecae.slice. Apr 17 23:27:32.719028 kubelet[2519]: I0417 23:27:32.718859 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae-xtables-lock\") pod \"kube-proxy-8tdpd\" (UID: \"405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae\") " pod="kube-system/kube-proxy-8tdpd" Apr 17 23:27:32.719028 kubelet[2519]: I0417 23:27:32.718905 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae-kube-proxy\") pod \"kube-proxy-8tdpd\" (UID: \"405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae\") " pod="kube-system/kube-proxy-8tdpd" Apr 17 23:27:32.719028 kubelet[2519]: I0417 23:27:32.718921 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae-lib-modules\") pod \"kube-proxy-8tdpd\" (UID: \"405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae\") " pod="kube-system/kube-proxy-8tdpd" Apr 17 23:27:32.719028 kubelet[2519]: I0417 23:27:32.718939 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2977l\" (UniqueName: 
\"kubernetes.io/projected/405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae-kube-api-access-2977l\") pod \"kube-proxy-8tdpd\" (UID: \"405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae\") " pod="kube-system/kube-proxy-8tdpd" Apr 17 23:27:33.009210 systemd[1]: Created slice kubepods-besteffort-pod751cb202_c6e0_4619_8249_dc5755756f86.slice - libcontainer container kubepods-besteffort-pod751cb202_c6e0_4619_8249_dc5755756f86.slice. Apr 17 23:27:33.014229 containerd[1473]: time="2026-04-17T23:27:33.012918320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tdpd,Uid:405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae,Namespace:kube-system,Attempt:0,}" Apr 17 23:27:33.021528 kubelet[2519]: I0417 23:27:33.021480 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/751cb202-c6e0-4619-8249-dc5755756f86-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-ds445\" (UID: \"751cb202-c6e0-4619-8249-dc5755756f86\") " pod="tigera-operator/tigera-operator-6cf4cccc57-ds445" Apr 17 23:27:33.021769 kubelet[2519]: I0417 23:27:33.021700 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6h4l\" (UniqueName: \"kubernetes.io/projected/751cb202-c6e0-4619-8249-dc5755756f86-kube-api-access-k6h4l\") pod \"tigera-operator-6cf4cccc57-ds445\" (UID: \"751cb202-c6e0-4619-8249-dc5755756f86\") " pod="tigera-operator/tigera-operator-6cf4cccc57-ds445" Apr 17 23:27:33.053638 containerd[1473]: time="2026-04-17T23:27:33.053305120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:33.053638 containerd[1473]: time="2026-04-17T23:27:33.053408320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:33.053638 containerd[1473]: time="2026-04-17T23:27:33.053434360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:33.053638 containerd[1473]: time="2026-04-17T23:27:33.053515920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:33.081345 systemd[1]: Started cri-containerd-8270814e704618446bb3e6c415dbfe6dcab4a5f62ad754b2d777fc9373afeaae.scope - libcontainer container 8270814e704618446bb3e6c415dbfe6dcab4a5f62ad754b2d777fc9373afeaae. Apr 17 23:27:33.110608 containerd[1473]: time="2026-04-17T23:27:33.110567600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tdpd,Uid:405f57d1-7e8a-4c4d-ae91-a8a9c5c2ecae,Namespace:kube-system,Attempt:0,} returns sandbox id \"8270814e704618446bb3e6c415dbfe6dcab4a5f62ad754b2d777fc9373afeaae\"" Apr 17 23:27:33.118070 containerd[1473]: time="2026-04-17T23:27:33.117950920Z" level=info msg="CreateContainer within sandbox \"8270814e704618446bb3e6c415dbfe6dcab4a5f62ad754b2d777fc9373afeaae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:27:33.137872 containerd[1473]: time="2026-04-17T23:27:33.137824000Z" level=info msg="CreateContainer within sandbox \"8270814e704618446bb3e6c415dbfe6dcab4a5f62ad754b2d777fc9373afeaae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e6cf691f536c433df61ebdb5866bef30070d9a00ec4af767aa73c5e22017ab9a\"" Apr 17 23:27:33.139136 containerd[1473]: time="2026-04-17T23:27:33.138965160Z" level=info msg="StartContainer for \"e6cf691f536c433df61ebdb5866bef30070d9a00ec4af767aa73c5e22017ab9a\"" Apr 17 23:27:33.175250 systemd[1]: Started cri-containerd-e6cf691f536c433df61ebdb5866bef30070d9a00ec4af767aa73c5e22017ab9a.scope - libcontainer container 
e6cf691f536c433df61ebdb5866bef30070d9a00ec4af767aa73c5e22017ab9a. Apr 17 23:27:33.205420 containerd[1473]: time="2026-04-17T23:27:33.205172120Z" level=info msg="StartContainer for \"e6cf691f536c433df61ebdb5866bef30070d9a00ec4af767aa73c5e22017ab9a\" returns successfully" Apr 17 23:27:33.319238 containerd[1473]: time="2026-04-17T23:27:33.319130760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-ds445,Uid:751cb202-c6e0-4619-8249-dc5755756f86,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:27:33.350719 containerd[1473]: time="2026-04-17T23:27:33.350605040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:33.350719 containerd[1473]: time="2026-04-17T23:27:33.350663760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:33.350719 containerd[1473]: time="2026-04-17T23:27:33.350690800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:33.350963 containerd[1473]: time="2026-04-17T23:27:33.350817720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:33.367249 systemd[1]: Started cri-containerd-8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec.scope - libcontainer container 8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec. 
Apr 17 23:27:33.400413 containerd[1473]: time="2026-04-17T23:27:33.400366040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-ds445,Uid:751cb202-c6e0-4619-8249-dc5755756f86,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec\"" Apr 17 23:27:33.402754 containerd[1473]: time="2026-04-17T23:27:33.402713760Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:27:35.996686 kubelet[2519]: I0417 23:27:35.996299 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8tdpd" podStartSLOduration=3.9962763199999998 podStartE2EDuration="3.99627632s" podCreationTimestamp="2026-04-17 23:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:27:34.05915452 +0000 UTC m=+8.214977081" watchObservedRunningTime="2026-04-17 23:27:35.99627632 +0000 UTC m=+10.152098801" Apr 17 23:27:36.163793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527377262.mount: Deactivated successfully. 
Apr 17 23:27:36.605515 containerd[1473]: time="2026-04-17T23:27:36.605157960Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:27:36.606790 containerd[1473]: time="2026-04-17T23:27:36.606748560Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 17 23:27:36.607522 containerd[1473]: time="2026-04-17T23:27:36.607226800Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:27:36.613704 containerd[1473]: time="2026-04-17T23:27:36.613269680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:27:36.617000 containerd[1473]: time="2026-04-17T23:27:36.616947440Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 3.2141886s" Apr 17 23:27:36.617000 containerd[1473]: time="2026-04-17T23:27:36.616986320Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 17 23:27:36.623345 containerd[1473]: time="2026-04-17T23:27:36.623304280Z" level=info msg="CreateContainer within sandbox \"8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:27:36.646247 containerd[1473]: time="2026-04-17T23:27:36.646181120Z" level=info msg="CreateContainer within sandbox 
\"8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e\"" Apr 17 23:27:36.648488 containerd[1473]: time="2026-04-17T23:27:36.648430240Z" level=info msg="StartContainer for \"a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e\"" Apr 17 23:27:36.680258 systemd[1]: Started cri-containerd-a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e.scope - libcontainer container a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e. Apr 17 23:27:36.709321 containerd[1473]: time="2026-04-17T23:27:36.709226880Z" level=info msg="StartContainer for \"a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e\" returns successfully" Apr 17 23:27:38.314051 systemd-timesyncd[1354]: Contacted time server 128.140.109.119:123 (2.flatcar.pool.ntp.org). Apr 17 23:27:38.314706 systemd-timesyncd[1354]: Initial clock synchronization to Fri 2026-04-17 23:27:38.555212 UTC. Apr 17 23:27:39.832421 kubelet[2519]: I0417 23:27:39.831688 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-ds445" podStartSLOduration=4.6156616459999995 podStartE2EDuration="7.831673846s" podCreationTimestamp="2026-04-17 23:27:32 +0000 UTC" firstStartedPulling="2026-04-17 23:27:33.40214468 +0000 UTC m=+7.557967161" lastFinishedPulling="2026-04-17 23:27:36.61815688 +0000 UTC m=+10.773979361" observedRunningTime="2026-04-17 23:27:37.06674504 +0000 UTC m=+11.222567521" watchObservedRunningTime="2026-04-17 23:27:39.831673846 +0000 UTC m=+13.987496328" Apr 17 23:27:42.706493 sudo[1663]: pam_unix(sudo:session): session closed for user root Apr 17 23:27:42.723696 sshd[1660]: pam_unix(sshd:session): session closed for user core Apr 17 23:27:42.728695 systemd[1]: sshd@6-142.132.185.111:22-50.85.169.122:53372.service: Deactivated successfully. 
Apr 17 23:27:42.734147 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:27:42.734907 systemd[1]: session-7.scope: Consumed 5.071s CPU time, 154.0M memory peak, 0B memory swap peak. Apr 17 23:27:42.738259 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:27:42.740939 systemd-logind[1453]: Removed session 7. Apr 17 23:27:44.901176 update_engine[1454]: I20260417 23:27:44.901101 1454 update_attempter.cc:509] Updating boot flags... Apr 17 23:27:44.969127 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2921) Apr 17 23:27:48.380131 systemd[1]: Created slice kubepods-besteffort-pod41cd8df0_f96b_46c9_a778_f016e59f0bdd.slice - libcontainer container kubepods-besteffort-pod41cd8df0_f96b_46c9_a778_f016e59f0bdd.slice. Apr 17 23:27:48.432087 kubelet[2519]: I0417 23:27:48.431816 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41cd8df0-f96b-46c9-a778-f016e59f0bdd-typha-certs\") pod \"calico-typha-5db95c4984-d6vwz\" (UID: \"41cd8df0-f96b-46c9-a778-f016e59f0bdd\") " pod="calico-system/calico-typha-5db95c4984-d6vwz" Apr 17 23:27:48.432087 kubelet[2519]: I0417 23:27:48.431881 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb92w\" (UniqueName: \"kubernetes.io/projected/41cd8df0-f96b-46c9-a778-f016e59f0bdd-kube-api-access-sb92w\") pod \"calico-typha-5db95c4984-d6vwz\" (UID: \"41cd8df0-f96b-46c9-a778-f016e59f0bdd\") " pod="calico-system/calico-typha-5db95c4984-d6vwz" Apr 17 23:27:48.432087 kubelet[2519]: I0417 23:27:48.431997 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cd8df0-f96b-46c9-a778-f016e59f0bdd-tigera-ca-bundle\") pod \"calico-typha-5db95c4984-d6vwz\" (UID: 
\"41cd8df0-f96b-46c9-a778-f016e59f0bdd\") " pod="calico-system/calico-typha-5db95c4984-d6vwz" Apr 17 23:27:48.476783 systemd[1]: Created slice kubepods-besteffort-pod0c061ef6_aeca_4b7f_b9b7_e336da0ed154.slice - libcontainer container kubepods-besteffort-pod0c061ef6_aeca_4b7f_b9b7_e336da0ed154.slice. Apr 17 23:27:48.533440 kubelet[2519]: I0417 23:27:48.532877 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-flexvol-driver-host\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533440 kubelet[2519]: I0417 23:27:48.533006 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-sys-fs\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533440 kubelet[2519]: I0417 23:27:48.533080 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-cni-net-dir\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533440 kubelet[2519]: I0417 23:27:48.533116 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-lib-modules\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533440 kubelet[2519]: I0417 23:27:48.533157 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-certs\" (UniqueName: \"kubernetes.io/secret/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-node-certs\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533793 kubelet[2519]: I0417 23:27:48.533225 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-tigera-ca-bundle\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533793 kubelet[2519]: I0417 23:27:48.533258 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-xtables-lock\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533793 kubelet[2519]: I0417 23:27:48.533330 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-cni-log-dir\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533793 kubelet[2519]: I0417 23:27:48.533350 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-nodeproc\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.533793 kubelet[2519]: I0417 23:27:48.533672 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-var-lib-calico\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.534012 kubelet[2519]: I0417 23:27:48.533695 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-var-run-calico\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.534012 kubelet[2519]: I0417 23:27:48.533743 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-policysync\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.534012 kubelet[2519]: I0417 23:27:48.533760 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccx4c\" (UniqueName: \"kubernetes.io/projected/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-kube-api-access-ccx4c\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.534012 kubelet[2519]: I0417 23:27:48.533786 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-bpffs\") pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.534012 kubelet[2519]: I0417 23:27:48.533803 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0c061ef6-aeca-4b7f-b9b7-e336da0ed154-cni-bin-dir\") 
pod \"calico-node-v75dm\" (UID: \"0c061ef6-aeca-4b7f-b9b7-e336da0ed154\") " pod="calico-system/calico-node-v75dm" Apr 17 23:27:48.591389 kubelet[2519]: E0417 23:27:48.591347 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107" Apr 17 23:27:48.634875 kubelet[2519]: I0417 23:27:48.634680 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35637d90-8ee7-47e3-a79b-be82b4dd0107-kubelet-dir\") pod \"csi-node-driver-h44sh\" (UID: \"35637d90-8ee7-47e3-a79b-be82b4dd0107\") " pod="calico-system/csi-node-driver-h44sh" Apr 17 23:27:48.636588 kubelet[2519]: I0417 23:27:48.636331 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35637d90-8ee7-47e3-a79b-be82b4dd0107-registration-dir\") pod \"csi-node-driver-h44sh\" (UID: \"35637d90-8ee7-47e3-a79b-be82b4dd0107\") " pod="calico-system/csi-node-driver-h44sh" Apr 17 23:27:48.636588 kubelet[2519]: I0417 23:27:48.636548 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35637d90-8ee7-47e3-a79b-be82b4dd0107-socket-dir\") pod \"csi-node-driver-h44sh\" (UID: \"35637d90-8ee7-47e3-a79b-be82b4dd0107\") " pod="calico-system/csi-node-driver-h44sh" Apr 17 23:27:48.636588 kubelet[2519]: I0417 23:27:48.636568 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/35637d90-8ee7-47e3-a79b-be82b4dd0107-varrun\") pod \"csi-node-driver-h44sh\" (UID: 
\"35637d90-8ee7-47e3-a79b-be82b4dd0107\") " pod="calico-system/csi-node-driver-h44sh" Apr 17 23:27:48.636772 kubelet[2519]: I0417 23:27:48.636671 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbh9j\" (UniqueName: \"kubernetes.io/projected/35637d90-8ee7-47e3-a79b-be82b4dd0107-kube-api-access-vbh9j\") pod \"csi-node-driver-h44sh\" (UID: \"35637d90-8ee7-47e3-a79b-be82b4dd0107\") " pod="calico-system/csi-node-driver-h44sh" Apr 17 23:27:48.643126 kubelet[2519]: E0417 23:27:48.643087 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.643126 kubelet[2519]: W0417 23:27:48.643109 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.643262 kubelet[2519]: E0417 23:27:48.643130 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.663836 kubelet[2519]: E0417 23:27:48.663803 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.663836 kubelet[2519]: W0417 23:27:48.663826 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.663836 kubelet[2519]: E0417 23:27:48.663846 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.687023 containerd[1473]: time="2026-04-17T23:27:48.686903968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db95c4984-d6vwz,Uid:41cd8df0-f96b-46c9-a778-f016e59f0bdd,Namespace:calico-system,Attempt:0,}" Apr 17 23:27:48.713994 containerd[1473]: time="2026-04-17T23:27:48.713618398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:48.714278 containerd[1473]: time="2026-04-17T23:27:48.713749618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:48.714278 containerd[1473]: time="2026-04-17T23:27:48.713773715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:48.714278 containerd[1473]: time="2026-04-17T23:27:48.713901464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:48.737624 kubelet[2519]: E0417 23:27:48.737560 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.737624 kubelet[2519]: W0417 23:27:48.737600 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.737809 kubelet[2519]: E0417 23:27:48.737649 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.738112 kubelet[2519]: E0417 23:27:48.738050 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.738208 kubelet[2519]: W0417 23:27:48.738113 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.738208 kubelet[2519]: E0417 23:27:48.738137 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.738772 kubelet[2519]: E0417 23:27:48.738721 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.738772 kubelet[2519]: W0417 23:27:48.738751 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.738874 kubelet[2519]: E0417 23:27:48.738791 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.739326 kubelet[2519]: E0417 23:27:48.739300 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.739403 kubelet[2519]: W0417 23:27:48.739334 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.739403 kubelet[2519]: E0417 23:27:48.739361 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.739808 systemd[1]: Started cri-containerd-1bc63ec20a6e2ce214e6212e8f9e30faa48388e440721d2ab9ffbc3d577b26e3.scope - libcontainer container 1bc63ec20a6e2ce214e6212e8f9e30faa48388e440721d2ab9ffbc3d577b26e3. Apr 17 23:27:48.741481 kubelet[2519]: E0417 23:27:48.740673 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.741481 kubelet[2519]: W0417 23:27:48.740729 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.741481 kubelet[2519]: E0417 23:27:48.740750 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.741647 kubelet[2519]: E0417 23:27:48.741494 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.741647 kubelet[2519]: W0417 23:27:48.741522 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.741647 kubelet[2519]: E0417 23:27:48.741544 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.742149 kubelet[2519]: E0417 23:27:48.742113 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.742788 kubelet[2519]: W0417 23:27:48.742144 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.742788 kubelet[2519]: E0417 23:27:48.742760 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.743271 kubelet[2519]: E0417 23:27:48.743034 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.743271 kubelet[2519]: W0417 23:27:48.743081 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.743271 kubelet[2519]: E0417 23:27:48.743093 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.743553 kubelet[2519]: E0417 23:27:48.743388 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.743553 kubelet[2519]: W0417 23:27:48.743418 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.743553 kubelet[2519]: E0417 23:27:48.743429 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.743743 kubelet[2519]: E0417 23:27:48.743720 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.743791 kubelet[2519]: W0417 23:27:48.743755 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.743791 kubelet[2519]: E0417 23:27:48.743768 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.744030 kubelet[2519]: E0417 23:27:48.743980 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.744030 kubelet[2519]: W0417 23:27:48.743994 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.744030 kubelet[2519]: E0417 23:27:48.744004 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.744726 kubelet[2519]: E0417 23:27:48.744240 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.744726 kubelet[2519]: W0417 23:27:48.744254 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.744726 kubelet[2519]: E0417 23:27:48.744263 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.744726 kubelet[2519]: E0417 23:27:48.744508 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.744726 kubelet[2519]: W0417 23:27:48.744518 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.744726 kubelet[2519]: E0417 23:27:48.744528 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.744899 kubelet[2519]: E0417 23:27:48.744738 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.744899 kubelet[2519]: W0417 23:27:48.744748 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.744899 kubelet[2519]: E0417 23:27:48.744756 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.744972 kubelet[2519]: E0417 23:27:48.744945 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.744972 kubelet[2519]: W0417 23:27:48.744955 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.744972 kubelet[2519]: E0417 23:27:48.744963 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745282 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746372 kubelet[2519]: W0417 23:27:48.745297 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745309 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745497 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746372 kubelet[2519]: W0417 23:27:48.745506 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745515 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745671 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746372 kubelet[2519]: W0417 23:27:48.745679 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745687 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.746372 kubelet[2519]: E0417 23:27:48.745842 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746833 kubelet[2519]: W0417 23:27:48.745849 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.746833 kubelet[2519]: E0417 23:27:48.745856 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.746833 kubelet[2519]: E0417 23:27:48.746008 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746833 kubelet[2519]: W0417 23:27:48.746016 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.746833 kubelet[2519]: E0417 23:27:48.746024 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.746833 kubelet[2519]: E0417 23:27:48.746269 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746833 kubelet[2519]: W0417 23:27:48.746281 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.746833 kubelet[2519]: E0417 23:27:48.746291 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.746833 kubelet[2519]: E0417 23:27:48.746775 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.746833 kubelet[2519]: W0417 23:27:48.746786 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.747409 kubelet[2519]: E0417 23:27:48.746797 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.747409 kubelet[2519]: E0417 23:27:48.746964 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.747409 kubelet[2519]: W0417 23:27:48.746974 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.747409 kubelet[2519]: E0417 23:27:48.746981 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.747960 kubelet[2519]: E0417 23:27:48.747628 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.747960 kubelet[2519]: W0417 23:27:48.747649 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.747960 kubelet[2519]: E0417 23:27:48.747661 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.747960 kubelet[2519]: E0417 23:27:48.747894 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.747960 kubelet[2519]: W0417 23:27:48.747902 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.747960 kubelet[2519]: E0417 23:27:48.747912 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:48.765739 kubelet[2519]: E0417 23:27:48.765699 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:48.765739 kubelet[2519]: W0417 23:27:48.765728 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:48.765885 kubelet[2519]: E0417 23:27:48.765764 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:48.783905 containerd[1473]: time="2026-04-17T23:27:48.783847842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db95c4984-d6vwz,Uid:41cd8df0-f96b-46c9-a778-f016e59f0bdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bc63ec20a6e2ce214e6212e8f9e30faa48388e440721d2ab9ffbc3d577b26e3\"" Apr 17 23:27:48.786651 containerd[1473]: time="2026-04-17T23:27:48.786438826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:27:48.792513 containerd[1473]: time="2026-04-17T23:27:48.792161092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v75dm,Uid:0c061ef6-aeca-4b7f-b9b7-e336da0ed154,Namespace:calico-system,Attempt:0,}" Apr 17 23:27:48.824531 containerd[1473]: time="2026-04-17T23:27:48.823487603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:27:48.824531 containerd[1473]: time="2026-04-17T23:27:48.823615917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:27:48.824531 containerd[1473]: time="2026-04-17T23:27:48.823632466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:48.824531 containerd[1473]: time="2026-04-17T23:27:48.823767359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:27:48.845271 systemd[1]: Started cri-containerd-ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268.scope - libcontainer container ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268. Apr 17 23:27:48.874150 containerd[1473]: time="2026-04-17T23:27:48.873937268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v75dm,Uid:0c061ef6-aeca-4b7f-b9b7-e336da0ed154,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\"" Apr 17 23:27:50.448457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992238106.mount: Deactivated successfully. Apr 17 23:27:50.979276 kubelet[2519]: E0417 23:27:50.979179 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107" Apr 17 23:27:51.560314 containerd[1473]: time="2026-04-17T23:27:51.560153017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:27:51.562153 containerd[1473]: time="2026-04-17T23:27:51.562118938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 17 23:27:51.563550 containerd[1473]: time="2026-04-17T23:27:51.563466602Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 
23:27:51.566267 containerd[1473]: time="2026-04-17T23:27:51.566189094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:27:51.567119 containerd[1473]: time="2026-04-17T23:27:51.567054846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.780550023s" Apr 17 23:27:51.567119 containerd[1473]: time="2026-04-17T23:27:51.567112072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 17 23:27:51.572814 containerd[1473]: time="2026-04-17T23:27:51.571597177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:27:51.591745 containerd[1473]: time="2026-04-17T23:27:51.591700264Z" level=info msg="CreateContainer within sandbox \"1bc63ec20a6e2ce214e6212e8f9e30faa48388e440721d2ab9ffbc3d577b26e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:27:51.610899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331753112.mount: Deactivated successfully. 
Apr 17 23:27:51.616086 containerd[1473]: time="2026-04-17T23:27:51.615840187Z" level=info msg="CreateContainer within sandbox \"1bc63ec20a6e2ce214e6212e8f9e30faa48388e440721d2ab9ffbc3d577b26e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f475aaa2e8d7776913cc1c50abcfce906fb42e8a806e99f4b8a136240937e92\"" Apr 17 23:27:51.617658 containerd[1473]: time="2026-04-17T23:27:51.617424079Z" level=info msg="StartContainer for \"9f475aaa2e8d7776913cc1c50abcfce906fb42e8a806e99f4b8a136240937e92\"" Apr 17 23:27:51.646287 systemd[1]: Started cri-containerd-9f475aaa2e8d7776913cc1c50abcfce906fb42e8a806e99f4b8a136240937e92.scope - libcontainer container 9f475aaa2e8d7776913cc1c50abcfce906fb42e8a806e99f4b8a136240937e92. Apr 17 23:27:51.685370 containerd[1473]: time="2026-04-17T23:27:51.685320915Z" level=info msg="StartContainer for \"9f475aaa2e8d7776913cc1c50abcfce906fb42e8a806e99f4b8a136240937e92\" returns successfully" Apr 17 23:27:52.135987 kubelet[2519]: E0417 23:27:52.135836 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.135987 kubelet[2519]: W0417 23:27:52.135879 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.135987 kubelet[2519]: E0417 23:27:52.135971 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.137165 kubelet[2519]: E0417 23:27:52.136498 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.137165 kubelet[2519]: W0417 23:27:52.136515 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.137165 kubelet[2519]: E0417 23:27:52.136554 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.137165 kubelet[2519]: E0417 23:27:52.136849 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.137165 kubelet[2519]: W0417 23:27:52.136862 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.137165 kubelet[2519]: E0417 23:27:52.136876 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.137547 kubelet[2519]: E0417 23:27:52.137214 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.137547 kubelet[2519]: W0417 23:27:52.137252 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.137547 kubelet[2519]: E0417 23:27:52.137284 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.137748 kubelet[2519]: E0417 23:27:52.137611 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.137748 kubelet[2519]: W0417 23:27:52.137624 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.137748 kubelet[2519]: E0417 23:27:52.137638 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.138110 kubelet[2519]: E0417 23:27:52.137935 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.138110 kubelet[2519]: W0417 23:27:52.137949 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.138110 kubelet[2519]: E0417 23:27:52.137973 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.138323 kubelet[2519]: E0417 23:27:52.138311 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.138432 kubelet[2519]: W0417 23:27:52.138325 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.138432 kubelet[2519]: E0417 23:27:52.138341 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.138601 kubelet[2519]: E0417 23:27:52.138568 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.138601 kubelet[2519]: W0417 23:27:52.138585 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.138601 kubelet[2519]: E0417 23:27:52.138598 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.138841 kubelet[2519]: E0417 23:27:52.138818 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.138841 kubelet[2519]: W0417 23:27:52.138833 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.139155 kubelet[2519]: E0417 23:27:52.138846 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.139155 kubelet[2519]: E0417 23:27:52.139104 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.139155 kubelet[2519]: W0417 23:27:52.139116 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.139155 kubelet[2519]: E0417 23:27:52.139129 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.139347 kubelet[2519]: E0417 23:27:52.139327 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.139347 kubelet[2519]: W0417 23:27:52.139338 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.139438 kubelet[2519]: E0417 23:27:52.139350 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.139575 kubelet[2519]: E0417 23:27:52.139559 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.139575 kubelet[2519]: W0417 23:27:52.139574 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.139575 kubelet[2519]: E0417 23:27:52.139587 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.139852 kubelet[2519]: E0417 23:27:52.139801 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.139852 kubelet[2519]: W0417 23:27:52.139812 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.139852 kubelet[2519]: E0417 23:27:52.139823 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.140286 kubelet[2519]: E0417 23:27:52.140018 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.140286 kubelet[2519]: W0417 23:27:52.140043 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.140286 kubelet[2519]: E0417 23:27:52.140055 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.140391 kubelet[2519]: E0417 23:27:52.140307 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.140391 kubelet[2519]: W0417 23:27:52.140319 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.140391 kubelet[2519]: E0417 23:27:52.140331 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.166318 kubelet[2519]: E0417 23:27:52.165978 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.166318 kubelet[2519]: W0417 23:27:52.166011 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.166318 kubelet[2519]: E0417 23:27:52.166047 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.166745 kubelet[2519]: E0417 23:27:52.166721 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.166972 kubelet[2519]: W0417 23:27:52.166849 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.166972 kubelet[2519]: E0417 23:27:52.166896 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.167478 kubelet[2519]: E0417 23:27:52.167440 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.167573 kubelet[2519]: W0417 23:27:52.167493 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.167573 kubelet[2519]: E0417 23:27:52.167519 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.168054 kubelet[2519]: E0417 23:27:52.168013 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.168054 kubelet[2519]: W0417 23:27:52.168044 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.168160 kubelet[2519]: E0417 23:27:52.168095 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.168463 kubelet[2519]: E0417 23:27:52.168440 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.168504 kubelet[2519]: W0417 23:27:52.168466 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.168504 kubelet[2519]: E0417 23:27:52.168486 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.168812 kubelet[2519]: E0417 23:27:52.168792 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.168860 kubelet[2519]: W0417 23:27:52.168815 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.168860 kubelet[2519]: E0417 23:27:52.168834 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.169355 kubelet[2519]: E0417 23:27:52.169333 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.169413 kubelet[2519]: W0417 23:27:52.169358 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.169413 kubelet[2519]: E0417 23:27:52.169379 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.169786 kubelet[2519]: E0417 23:27:52.169768 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.169830 kubelet[2519]: W0417 23:27:52.169791 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.169830 kubelet[2519]: E0417 23:27:52.169810 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.170242 kubelet[2519]: E0417 23:27:52.170201 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.170242 kubelet[2519]: W0417 23:27:52.170229 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.170325 kubelet[2519]: E0417 23:27:52.170251 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.170672 kubelet[2519]: E0417 23:27:52.170653 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.170718 kubelet[2519]: W0417 23:27:52.170676 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.170718 kubelet[2519]: E0417 23:27:52.170696 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.171132 kubelet[2519]: E0417 23:27:52.171110 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.171184 kubelet[2519]: W0417 23:27:52.171135 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.171184 kubelet[2519]: E0417 23:27:52.171156 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.171655 kubelet[2519]: E0417 23:27:52.171633 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.171704 kubelet[2519]: W0417 23:27:52.171660 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.171704 kubelet[2519]: E0417 23:27:52.171680 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.172107 kubelet[2519]: E0417 23:27:52.172053 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.172151 kubelet[2519]: W0417 23:27:52.172109 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.172151 kubelet[2519]: E0417 23:27:52.172129 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.172408 kubelet[2519]: E0417 23:27:52.172391 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.172447 kubelet[2519]: W0417 23:27:52.172410 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.172447 kubelet[2519]: E0417 23:27:52.172425 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.172677 kubelet[2519]: E0417 23:27:52.172659 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.172718 kubelet[2519]: W0417 23:27:52.172678 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.172718 kubelet[2519]: E0417 23:27:52.172692 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.173163 kubelet[2519]: E0417 23:27:52.173045 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.173163 kubelet[2519]: W0417 23:27:52.173106 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.173163 kubelet[2519]: E0417 23:27:52.173120 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.174109 kubelet[2519]: E0417 23:27:52.173962 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.174109 kubelet[2519]: W0417 23:27:52.173990 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.174109 kubelet[2519]: E0417 23:27:52.174010 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:52.179097 kubelet[2519]: E0417 23:27:52.177309 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:52.179097 kubelet[2519]: W0417 23:27:52.177326 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:52.179097 kubelet[2519]: E0417 23:27:52.177342 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:52.979291 kubelet[2519]: E0417 23:27:52.979233 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107" Apr 17 23:27:53.114085 kubelet[2519]: I0417 23:27:53.113797 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:27:53.146810 kubelet[2519]: E0417 23:27:53.146783 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.147186 kubelet[2519]: W0417 23:27:53.146804 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.147186 kubelet[2519]: E0417 23:27:53.146876 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.147186 kubelet[2519]: E0417 23:27:53.147088 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.147186 kubelet[2519]: W0417 23:27:53.147098 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.147186 kubelet[2519]: E0417 23:27:53.147107 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.147333 kubelet[2519]: E0417 23:27:53.147311 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.147333 kubelet[2519]: W0417 23:27:53.147328 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.147387 kubelet[2519]: E0417 23:27:53.147340 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.147752 kubelet[2519]: E0417 23:27:53.147724 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.147752 kubelet[2519]: W0417 23:27:53.147751 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.147820 kubelet[2519]: E0417 23:27:53.147767 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.148142 kubelet[2519]: E0417 23:27:53.148126 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.148212 kubelet[2519]: W0417 23:27:53.148175 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.148212 kubelet[2519]: E0417 23:27:53.148189 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.148586 kubelet[2519]: E0417 23:27:53.148570 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.148586 kubelet[2519]: W0417 23:27:53.148585 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.148660 kubelet[2519]: E0417 23:27:53.148597 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.148805 kubelet[2519]: E0417 23:27:53.148794 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.148840 kubelet[2519]: W0417 23:27:53.148805 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.148840 kubelet[2519]: E0417 23:27:53.148814 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.148986 kubelet[2519]: E0417 23:27:53.148974 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.149022 kubelet[2519]: W0417 23:27:53.148986 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.149022 kubelet[2519]: E0417 23:27:53.149004 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.149192 kubelet[2519]: E0417 23:27:53.149181 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.149192 kubelet[2519]: W0417 23:27:53.149192 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.149493 kubelet[2519]: E0417 23:27:53.149200 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.149635 kubelet[2519]: E0417 23:27:53.149614 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.149679 kubelet[2519]: W0417 23:27:53.149652 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.149679 kubelet[2519]: E0417 23:27:53.149666 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.149949 kubelet[2519]: E0417 23:27:53.149934 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.149949 kubelet[2519]: W0417 23:27:53.149948 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.150127 kubelet[2519]: E0417 23:27:53.150038 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.150456 kubelet[2519]: E0417 23:27:53.150441 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.150456 kubelet[2519]: W0417 23:27:53.150455 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.150541 kubelet[2519]: E0417 23:27:53.150466 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.150691 kubelet[2519]: E0417 23:27:53.150677 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.150691 kubelet[2519]: W0417 23:27:53.150690 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.150767 kubelet[2519]: E0417 23:27:53.150699 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.150867 kubelet[2519]: E0417 23:27:53.150856 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.150867 kubelet[2519]: W0417 23:27:53.150866 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.150933 kubelet[2519]: E0417 23:27:53.150874 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.151036 kubelet[2519]: E0417 23:27:53.151026 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.151036 kubelet[2519]: W0417 23:27:53.151036 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.151138 kubelet[2519]: E0417 23:27:53.151044 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.177856 kubelet[2519]: E0417 23:27:53.177828 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.178404 kubelet[2519]: W0417 23:27:53.178097 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.178404 kubelet[2519]: E0417 23:27:53.178126 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.179343 kubelet[2519]: E0417 23:27:53.179179 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.179343 kubelet[2519]: W0417 23:27:53.179196 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.179343 kubelet[2519]: E0417 23:27:53.179214 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.179836 kubelet[2519]: E0417 23:27:53.179476 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.179836 kubelet[2519]: W0417 23:27:53.179487 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.179836 kubelet[2519]: E0417 23:27:53.179497 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.179836 kubelet[2519]: E0417 23:27:53.179708 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.179836 kubelet[2519]: W0417 23:27:53.179717 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.179836 kubelet[2519]: E0417 23:27:53.179726 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.180311 kubelet[2519]: E0417 23:27:53.180197 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.180311 kubelet[2519]: W0417 23:27:53.180209 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.180311 kubelet[2519]: E0417 23:27:53.180222 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.180942 kubelet[2519]: E0417 23:27:53.180861 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.180942 kubelet[2519]: W0417 23:27:53.180875 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.180942 kubelet[2519]: E0417 23:27:53.180887 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.181458 kubelet[2519]: E0417 23:27:53.181232 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.181458 kubelet[2519]: W0417 23:27:53.181246 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.181458 kubelet[2519]: E0417 23:27:53.181258 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.181705 kubelet[2519]: E0417 23:27:53.181632 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.181705 kubelet[2519]: W0417 23:27:53.181644 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.181705 kubelet[2519]: E0417 23:27:53.181655 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.182044 kubelet[2519]: E0417 23:27:53.181950 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.182044 kubelet[2519]: W0417 23:27:53.181961 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.182044 kubelet[2519]: E0417 23:27:53.181972 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.182729 kubelet[2519]: E0417 23:27:53.182712 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.183019 kubelet[2519]: W0417 23:27:53.182808 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.183019 kubelet[2519]: E0417 23:27:53.182826 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.183499 kubelet[2519]: E0417 23:27:53.183468 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.183544 kubelet[2519]: W0417 23:27:53.183504 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.183544 kubelet[2519]: E0417 23:27:53.183531 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:27:53.184040 kubelet[2519]: E0417 23:27:53.184019 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.184126 kubelet[2519]: W0417 23:27:53.184045 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.184126 kubelet[2519]: E0417 23:27:53.184088 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:27:53.185011 kubelet[2519]: E0417 23:27:53.184986 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:27:53.185116 kubelet[2519]: W0417 23:27:53.185014 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:27:53.185116 kubelet[2519]: E0417 23:27:53.185037 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:27:53.185579 kubelet[2519]: E0417 23:27:53.185558 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:27:53.185683 kubelet[2519]: W0417 23:27:53.185582 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:27:53.185683 kubelet[2519]: E0417 23:27:53.185604 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:27:53.186388 kubelet[2519]: E0417 23:27:53.186364 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:27:53.186504 kubelet[2519]: W0417 23:27:53.186391 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:27:53.186504 kubelet[2519]: E0417 23:27:53.186413 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:27:53.187421 kubelet[2519]: E0417 23:27:53.187290 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:27:53.187421 kubelet[2519]: W0417 23:27:53.187322 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:27:53.187421 kubelet[2519]: E0417 23:27:53.187340 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:27:53.188402 kubelet[2519]: E0417 23:27:53.188128 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:27:53.188402 kubelet[2519]: W0417 23:27:53.188143 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:27:53.188402 kubelet[2519]: E0417 23:27:53.188154 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:27:53.188402 kubelet[2519]: E0417 23:27:53.188352 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:27:53.188402 kubelet[2519]: W0417 23:27:53.188361 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:27:53.188402 kubelet[2519]: E0417 23:27:53.188370 2519 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:27:53.243823 containerd[1473]: time="2026-04-17T23:27:53.243641599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:53.246209 containerd[1473]: time="2026-04-17T23:27:53.245860038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Apr 17 23:27:53.248697 containerd[1473]: time="2026-04-17T23:27:53.247391212Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:53.251152 containerd[1473]: time="2026-04-17T23:27:53.251102207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:27:53.252564 containerd[1473]: time="2026-04-17T23:27:53.252385151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.680719816s"
Apr 17 23:27:53.252564 containerd[1473]: time="2026-04-17T23:27:53.252423047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Apr 17 23:27:53.257823 containerd[1473]: time="2026-04-17T23:27:53.257736948Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 23:27:53.276188 containerd[1473]: time="2026-04-17T23:27:53.276012667Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e\""
Apr 17 23:27:53.279114 containerd[1473]: time="2026-04-17T23:27:53.277249277Z" level=info msg="StartContainer for \"64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e\""
Apr 17 23:27:53.307251 systemd[1]: Started cri-containerd-64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e.scope - libcontainer container 64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e.
Apr 17 23:27:53.346036 containerd[1473]: time="2026-04-17T23:27:53.345907920Z" level=info msg="StartContainer for \"64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e\" returns successfully"
Apr 17 23:27:53.363266 systemd[1]: cri-containerd-64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e.scope: Deactivated successfully.
Apr 17 23:27:53.387700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e-rootfs.mount: Deactivated successfully.
Apr 17 23:27:53.519876 containerd[1473]: time="2026-04-17T23:27:53.519491363Z" level=info msg="shim disconnected" id=64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e namespace=k8s.io
Apr 17 23:27:53.519876 containerd[1473]: time="2026-04-17T23:27:53.519581501Z" level=warning msg="cleaning up after shim disconnected" id=64d76d49e48c6e88ca5f082cb8635d0d14838bb5572e67967e893ee6188f4e7e namespace=k8s.io
Apr 17 23:27:53.519876 containerd[1473]: time="2026-04-17T23:27:53.519596731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:27:54.121892 containerd[1473]: time="2026-04-17T23:27:54.121563859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:27:54.145024 kubelet[2519]: I0417 23:27:54.144763 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5db95c4984-d6vwz" podStartSLOduration=3.360729927 podStartE2EDuration="6.144745257s" podCreationTimestamp="2026-04-17 23:27:48 +0000 UTC" firstStartedPulling="2026-04-17 23:27:48.785745228 +0000 UTC m=+22.941567669" lastFinishedPulling="2026-04-17 23:27:51.569760477 +0000 UTC m=+25.725582999" observedRunningTime="2026-04-17 23:27:52.125225498 +0000 UTC m=+26.281047979" watchObservedRunningTime="2026-04-17 23:27:54.144745257 +0000 UTC m=+28.300567738"
Apr 17 23:27:54.979829 kubelet[2519]: E0417 23:27:54.979292 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:27:56.978768 kubelet[2519]: E0417 23:27:56.978544 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:27:58.978322 kubelet[2519]: E0417 23:27:58.978076 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:28:00.979035 kubelet[2519]: E0417 23:28:00.978512 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:28:01.118971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227062676.mount: Deactivated successfully.
Apr 17 23:28:01.145229 containerd[1473]: time="2026-04-17T23:28:01.145163280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:01.146432 containerd[1473]: time="2026-04-17T23:28:01.146404142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Apr 17 23:28:01.148083 containerd[1473]: time="2026-04-17T23:28:01.148017679Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:01.152379 containerd[1473]: time="2026-04-17T23:28:01.152321952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:01.153154 containerd[1473]: time="2026-04-17T23:28:01.153121229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.031514942s"
Apr 17 23:28:01.153260 containerd[1473]: time="2026-04-17T23:28:01.153244105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Apr 17 23:28:01.159380 containerd[1473]: time="2026-04-17T23:28:01.158837357Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:28:01.177223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125091136.mount: Deactivated successfully.
Apr 17 23:28:01.189072 containerd[1473]: time="2026-04-17T23:28:01.188994035Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8\""
Apr 17 23:28:01.190550 containerd[1473]: time="2026-04-17T23:28:01.190392709Z" level=info msg="StartContainer for \"88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8\""
Apr 17 23:28:01.228315 systemd[1]: Started cri-containerd-88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8.scope - libcontainer container 88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8.
Apr 17 23:28:01.258285 containerd[1473]: time="2026-04-17T23:28:01.258157966Z" level=info msg="StartContainer for \"88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8\" returns successfully"
Apr 17 23:28:01.358805 systemd[1]: cri-containerd-88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8.scope: Deactivated successfully.
Apr 17 23:28:01.541525 containerd[1473]: time="2026-04-17T23:28:01.541363147Z" level=info msg="shim disconnected" id=88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8 namespace=k8s.io
Apr 17 23:28:01.541525 containerd[1473]: time="2026-04-17T23:28:01.541422001Z" level=warning msg="cleaning up after shim disconnected" id=88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8 namespace=k8s.io
Apr 17 23:28:01.541525 containerd[1473]: time="2026-04-17T23:28:01.541431736Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:28:02.119497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88e5129fc1898661df7a157df241a523a9f4efc2f8ccec2aa35d44eb7c8638b8-rootfs.mount: Deactivated successfully.
Apr 17 23:28:02.144150 containerd[1473]: time="2026-04-17T23:28:02.144092719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 23:28:02.979352 kubelet[2519]: E0417 23:28:02.979230 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:28:04.978739 kubelet[2519]: E0417 23:28:04.978319 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:28:05.072439 kubelet[2519]: I0417 23:28:05.071685 2519 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:28:06.053015 containerd[1473]: time="2026-04-17T23:28:06.052932932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:06.054982 containerd[1473]: time="2026-04-17T23:28:06.054909393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 17 23:28:06.055942 containerd[1473]: time="2026-04-17T23:28:06.055906650Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:06.059556 containerd[1473]: time="2026-04-17T23:28:06.059521533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:06.060239 containerd[1473]: time="2026-04-17T23:28:06.060201411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.916067119s"
Apr 17 23:28:06.060239 containerd[1473]: time="2026-04-17T23:28:06.060234918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 17 23:28:06.066648 containerd[1473]: time="2026-04-17T23:28:06.066405136Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 23:28:06.089555 containerd[1473]: time="2026-04-17T23:28:06.089382691Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77\""
Apr 17 23:28:06.090648 containerd[1473]: time="2026-04-17T23:28:06.090461135Z" level=info msg="StartContainer for \"fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77\""
Apr 17 23:28:06.129361 systemd[1]: Started cri-containerd-fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77.scope - libcontainer container fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77.
Apr 17 23:28:06.164949 containerd[1473]: time="2026-04-17T23:28:06.164673889Z" level=info msg="StartContainer for \"fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77\" returns successfully"
Apr 17 23:28:06.724682 containerd[1473]: time="2026-04-17T23:28:06.724608879Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:28:06.730033 systemd[1]: cri-containerd-fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77.scope: Deactivated successfully.
Apr 17 23:28:06.755687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77-rootfs.mount: Deactivated successfully.
Apr 17 23:28:06.788560 kubelet[2519]: I0417 23:28:06.787545 2519 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 17 23:28:06.804269 containerd[1473]: time="2026-04-17T23:28:06.804021375Z" level=info msg="shim disconnected" id=fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77 namespace=k8s.io
Apr 17 23:28:06.804269 containerd[1473]: time="2026-04-17T23:28:06.804091753Z" level=warning msg="cleaning up after shim disconnected" id=fd3d48300766bb7fdac9700872f5cc3a5b53381d638de82ab5144767e87ace77 namespace=k8s.io
Apr 17 23:28:06.804269 containerd[1473]: time="2026-04-17T23:28:06.804102802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:28:06.846757 systemd[1]: Created slice kubepods-besteffort-pod6cae17b8_aaee_4419_851b_914e2d779768.slice - libcontainer container kubepods-besteffort-pod6cae17b8_aaee_4419_851b_914e2d779768.slice.
Apr 17 23:28:06.863400 systemd[1]: Created slice kubepods-besteffort-podf88cc04e_7a14_485a_8daa_e700592ac36b.slice - libcontainer container kubepods-besteffort-podf88cc04e_7a14_485a_8daa_e700592ac36b.slice.
Apr 17 23:28:06.871126 kubelet[2519]: I0417 23:28:06.870984 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cae17b8-aaee-4419-851b-914e2d779768-tigera-ca-bundle\") pod \"calico-kube-controllers-7649d4fc56-khl48\" (UID: \"6cae17b8-aaee-4419-851b-914e2d779768\") " pod="calico-system/calico-kube-controllers-7649d4fc56-khl48"
Apr 17 23:28:06.871126 kubelet[2519]: I0417 23:28:06.871024 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vxv\" (UniqueName: \"kubernetes.io/projected/6cae17b8-aaee-4419-851b-914e2d779768-kube-api-access-r8vxv\") pod \"calico-kube-controllers-7649d4fc56-khl48\" (UID: \"6cae17b8-aaee-4419-851b-914e2d779768\") " pod="calico-system/calico-kube-controllers-7649d4fc56-khl48"
Apr 17 23:28:06.874325 systemd[1]: Created slice kubepods-burstable-pod671ffde3_921c_4e59_9f78_76ef9c5efeb2.slice - libcontainer container kubepods-burstable-pod671ffde3_921c_4e59_9f78_76ef9c5efeb2.slice.
Apr 17 23:28:06.881419 systemd[1]: Created slice kubepods-besteffort-pod4bad2253_83ea_4317_8f1d_4aea885a0488.slice - libcontainer container kubepods-besteffort-pod4bad2253_83ea_4317_8f1d_4aea885a0488.slice.
Apr 17 23:28:06.891713 systemd[1]: Created slice kubepods-burstable-pod3c075d03_ef84_4fae_a46f_26a1d184ca06.slice - libcontainer container kubepods-burstable-pod3c075d03_ef84_4fae_a46f_26a1d184ca06.slice.
Apr 17 23:28:06.899106 systemd[1]: Created slice kubepods-besteffort-poded934d57_c244_432a_8985_874dc75eb161.slice - libcontainer container kubepods-besteffort-poded934d57_c244_432a_8985_874dc75eb161.slice.
Apr 17 23:28:06.910427 systemd[1]: Created slice kubepods-besteffort-podc4151f14_8e49_44f5_8af3_2ec8c78a0e0e.slice - libcontainer container kubepods-besteffort-podc4151f14_8e49_44f5_8af3_2ec8c78a0e0e.slice.
Apr 17 23:28:06.972015 kubelet[2519]: I0417 23:28:06.971951 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbkz\" (UniqueName: \"kubernetes.io/projected/f88cc04e-7a14-485a-8daa-e700592ac36b-kube-api-access-srbkz\") pod \"calico-apiserver-6f8ddd65ff-cgvqb\" (UID: \"f88cc04e-7a14-485a-8daa-e700592ac36b\") " pod="calico-system/calico-apiserver-6f8ddd65ff-cgvqb"
Apr 17 23:28:06.972015 kubelet[2519]: I0417 23:28:06.971998 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-backend-key-pair\") pod \"whisker-585fd75c68-2fwmw\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " pod="calico-system/whisker-585fd75c68-2fwmw"
Apr 17 23:28:06.972015 kubelet[2519]: I0417 23:28:06.972018 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4xv\" (UniqueName: \"kubernetes.io/projected/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-kube-api-access-ft4xv\") pod \"whisker-585fd75c68-2fwmw\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " pod="calico-system/whisker-585fd75c68-2fwmw"
Apr 17 23:28:06.972707 kubelet[2519]: I0417 23:28:06.972052 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/671ffde3-921c-4e59-9f78-76ef9c5efeb2-config-volume\") pod \"coredns-7d764666f9-7878s\" (UID: \"671ffde3-921c-4e59-9f78-76ef9c5efeb2\") " pod="kube-system/coredns-7d764666f9-7878s"
Apr 17 23:28:06.972707 kubelet[2519]: I0417 23:28:06.972087 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bad2253-83ea-4317-8f1d-4aea885a0488-config\") pod \"goldmane-9f7667bb8-ptlpv\" (UID: \"4bad2253-83ea-4317-8f1d-4aea885a0488\") " pod="calico-system/goldmane-9f7667bb8-ptlpv"
Apr 17 23:28:06.972707 kubelet[2519]: I0417 23:28:06.972107 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed934d57-c244-432a-8985-874dc75eb161-calico-apiserver-certs\") pod \"calico-apiserver-6f8ddd65ff-8w2zs\" (UID: \"ed934d57-c244-432a-8985-874dc75eb161\") " pod="calico-system/calico-apiserver-6f8ddd65ff-8w2zs"
Apr 17 23:28:06.972707 kubelet[2519]: I0417 23:28:06.972129 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5llsg\" (UniqueName: \"kubernetes.io/projected/ed934d57-c244-432a-8985-874dc75eb161-kube-api-access-5llsg\") pod \"calico-apiserver-6f8ddd65ff-8w2zs\" (UID: \"ed934d57-c244-432a-8985-874dc75eb161\") " pod="calico-system/calico-apiserver-6f8ddd65ff-8w2zs"
Apr 17 23:28:06.972707 kubelet[2519]: I0417 23:28:06.972146 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4bad2253-83ea-4317-8f1d-4aea885a0488-goldmane-key-pair\") pod \"goldmane-9f7667bb8-ptlpv\" (UID: \"4bad2253-83ea-4317-8f1d-4aea885a0488\") " pod="calico-system/goldmane-9f7667bb8-ptlpv"
Apr 17 23:28:06.973024 kubelet[2519]: I0417 23:28:06.972163 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqgl\" (UniqueName: \"kubernetes.io/projected/4bad2253-83ea-4317-8f1d-4aea885a0488-kube-api-access-hvqgl\") pod \"goldmane-9f7667bb8-ptlpv\" (UID: \"4bad2253-83ea-4317-8f1d-4aea885a0488\") " pod="calico-system/goldmane-9f7667bb8-ptlpv"
Apr 17 23:28:06.973024 kubelet[2519]: I0417 23:28:06.972186 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bad2253-83ea-4317-8f1d-4aea885a0488-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-ptlpv\" (UID: \"4bad2253-83ea-4317-8f1d-4aea885a0488\") " pod="calico-system/goldmane-9f7667bb8-ptlpv"
Apr 17 23:28:06.973024 kubelet[2519]: I0417 23:28:06.972204 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-nginx-config\") pod \"whisker-585fd75c68-2fwmw\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " pod="calico-system/whisker-585fd75c68-2fwmw"
Apr 17 23:28:06.973024 kubelet[2519]: I0417 23:28:06.972222 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-ca-bundle\") pod \"whisker-585fd75c68-2fwmw\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " pod="calico-system/whisker-585fd75c68-2fwmw"
Apr 17 23:28:06.973024 kubelet[2519]: I0417 23:28:06.972260 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f88cc04e-7a14-485a-8daa-e700592ac36b-calico-apiserver-certs\") pod \"calico-apiserver-6f8ddd65ff-cgvqb\" (UID: \"f88cc04e-7a14-485a-8daa-e700592ac36b\") " pod="calico-system/calico-apiserver-6f8ddd65ff-cgvqb"
Apr 17 23:28:06.973326 kubelet[2519]: I0417 23:28:06.972279 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c075d03-ef84-4fae-a46f-26a1d184ca06-config-volume\") pod \"coredns-7d764666f9-6z49m\" (UID: \"3c075d03-ef84-4fae-a46f-26a1d184ca06\") " pod="kube-system/coredns-7d764666f9-6z49m"
Apr 17 23:28:06.973326 kubelet[2519]: I0417 23:28:06.972300 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhcg\" (UniqueName: \"kubernetes.io/projected/3c075d03-ef84-4fae-a46f-26a1d184ca06-kube-api-access-thhcg\") pod \"coredns-7d764666f9-6z49m\" (UID: \"3c075d03-ef84-4fae-a46f-26a1d184ca06\") " pod="kube-system/coredns-7d764666f9-6z49m"
Apr 17 23:28:06.973326 kubelet[2519]: I0417 23:28:06.972319 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4qq\" (UniqueName: \"kubernetes.io/projected/671ffde3-921c-4e59-9f78-76ef9c5efeb2-kube-api-access-gn4qq\") pod \"coredns-7d764666f9-7878s\" (UID: \"671ffde3-921c-4e59-9f78-76ef9c5efeb2\") " pod="kube-system/coredns-7d764666f9-7878s"
Apr 17 23:28:06.988881 systemd[1]: Created slice kubepods-besteffort-pod35637d90_8ee7_47e3_a79b_be82b4dd0107.slice - libcontainer container kubepods-besteffort-pod35637d90_8ee7_47e3_a79b_be82b4dd0107.slice.
Apr 17 23:28:07.001864 containerd[1473]: time="2026-04-17T23:28:07.001812552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h44sh,Uid:35637d90-8ee7-47e3-a79b-be82b4dd0107,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:07.139050 containerd[1473]: time="2026-04-17T23:28:07.138637741Z" level=error msg="Failed to destroy network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.139050 containerd[1473]: time="2026-04-17T23:28:07.138956489Z" level=error msg="encountered an error cleaning up failed sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.139050 containerd[1473]: time="2026-04-17T23:28:07.139005044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h44sh,Uid:35637d90-8ee7-47e3-a79b-be82b4dd0107,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.139927 kubelet[2519]: E0417 23:28:07.139885 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.139985 kubelet[2519]: E0417 23:28:07.139938 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h44sh"
Apr 17 23:28:07.139985 kubelet[2519]: E0417 23:28:07.139957 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h44sh"
Apr 17 23:28:07.140044 kubelet[2519]: E0417 23:28:07.140002 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h44sh_calico-system(35637d90-8ee7-47e3-a79b-be82b4dd0107)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h44sh_calico-system(35637d90-8ee7-47e3-a79b-be82b4dd0107)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:28:07.161719 containerd[1473]: time="2026-04-17T23:28:07.161671664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7649d4fc56-khl48,Uid:6cae17b8-aaee-4419-851b-914e2d779768,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:07.172402 containerd[1473]: time="2026-04-17T23:28:07.171960844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-cgvqb,Uid:f88cc04e-7a14-485a-8daa-e700592ac36b,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:07.175003 kubelet[2519]: I0417 23:28:07.174419 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b"
Apr 17 23:28:07.176237 containerd[1473]: time="2026-04-17T23:28:07.175197206Z" level=info msg="StopPodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\""
Apr 17 23:28:07.176237 containerd[1473]: time="2026-04-17T23:28:07.175526362Z" level=info msg="Ensure that sandbox f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b in task-service has been cleanup successfully"
Apr 17 23:28:07.181784 containerd[1473]: time="2026-04-17T23:28:07.181733695Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 23:28:07.184365 containerd[1473]: time="2026-04-17T23:28:07.184338243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7878s,Uid:671ffde3-921c-4e59-9f78-76ef9c5efeb2,Namespace:kube-system,Attempt:0,}"
Apr 17 23:28:07.194130 containerd[1473]: time="2026-04-17T23:28:07.194087036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ptlpv,Uid:4bad2253-83ea-4317-8f1d-4aea885a0488,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:07.197805 containerd[1473]: time="2026-04-17T23:28:07.197761112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6z49m,Uid:3c075d03-ef84-4fae-a46f-26a1d184ca06,Namespace:kube-system,Attempt:0,}"
Apr 17 23:28:07.212491 containerd[1473]: time="2026-04-17T23:28:07.211990719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-8w2zs,Uid:ed934d57-c244-432a-8985-874dc75eb161,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:07.216454 containerd[1473]: time="2026-04-17T23:28:07.216416534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585fd75c68-2fwmw,Uid:c4151f14-8e49-44f5-8af3-2ec8c78a0e0e,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:07.223668 containerd[1473]: time="2026-04-17T23:28:07.223422159Z" level=error msg="StopPodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" failed" error="failed to destroy network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.224110 kubelet[2519]: E0417 23:28:07.223901 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b"
Apr 17 23:28:07.224110 kubelet[2519]: E0417 23:28:07.223975 2519 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b"}
Apr 17 23:28:07.224110 kubelet[2519]: E0417 23:28:07.224034 2519 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35637d90-8ee7-47e3-a79b-be82b4dd0107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:28:07.224520 kubelet[2519]: E0417 23:28:07.224367 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35637d90-8ee7-47e3-a79b-be82b4dd0107\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h44sh" podUID="35637d90-8ee7-47e3-a79b-be82b4dd0107"
Apr 17 23:28:07.265309 containerd[1473]: time="2026-04-17T23:28:07.265194684Z" level=info msg="CreateContainer within sandbox \"ffa0bab203d6ee4fc2a24f96f41aed5a1ee7078a2f51c743bbc1dd75d3364268\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"567adcb3aadb1a7cdef8f9c301989ab13ea01f017bc6a94b977a54adb5686214\""
Apr 17 23:28:07.268387 containerd[1473]: time="2026-04-17T23:28:07.268136834Z" level=info msg="StartContainer for \"567adcb3aadb1a7cdef8f9c301989ab13ea01f017bc6a94b977a54adb5686214\""
Apr 17 23:28:07.327196 containerd[1473]: time="2026-04-17T23:28:07.327040328Z" level=error msg="Failed to destroy network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.345089 containerd[1473]: time="2026-04-17T23:28:07.344980797Z" level=error msg="encountered an error cleaning up failed sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.345336 containerd[1473]: time="2026-04-17T23:28:07.345118055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7649d4fc56-khl48,Uid:6cae17b8-aaee-4419-851b-914e2d779768,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:28:07.347863 kubelet[2519]: E0417 23:28:07.347666 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.347863 kubelet[2519]: E0417 23:28:07.347795 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7649d4fc56-khl48" Apr 17 23:28:07.347863 kubelet[2519]: E0417 23:28:07.347823 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7649d4fc56-khl48" Apr 17 23:28:07.348334 kubelet[2519]: E0417 23:28:07.348129 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7649d4fc56-khl48_calico-system(6cae17b8-aaee-4419-851b-914e2d779768)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7649d4fc56-khl48_calico-system(6cae17b8-aaee-4419-851b-914e2d779768)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7649d4fc56-khl48" 
podUID="6cae17b8-aaee-4419-851b-914e2d779768" Apr 17 23:28:07.395247 systemd[1]: Started cri-containerd-567adcb3aadb1a7cdef8f9c301989ab13ea01f017bc6a94b977a54adb5686214.scope - libcontainer container 567adcb3aadb1a7cdef8f9c301989ab13ea01f017bc6a94b977a54adb5686214. Apr 17 23:28:07.449751 containerd[1473]: time="2026-04-17T23:28:07.449583111Z" level=error msg="Failed to destroy network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.450278 containerd[1473]: time="2026-04-17T23:28:07.450132105Z" level=error msg="encountered an error cleaning up failed sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.450278 containerd[1473]: time="2026-04-17T23:28:07.450188065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585fd75c68-2fwmw,Uid:c4151f14-8e49-44f5-8af3-2ec8c78a0e0e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.450445 kubelet[2519]: E0417 23:28:07.450400 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.450488 kubelet[2519]: E0417 23:28:07.450459 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-585fd75c68-2fwmw" Apr 17 23:28:07.450488 kubelet[2519]: E0417 23:28:07.450478 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-585fd75c68-2fwmw" Apr 17 23:28:07.450575 kubelet[2519]: E0417 23:28:07.450523 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-585fd75c68-2fwmw_calico-system(c4151f14-8e49-44f5-8af3-2ec8c78a0e0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-585fd75c68-2fwmw_calico-system(c4151f14-8e49-44f5-8af3-2ec8c78a0e0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-585fd75c68-2fwmw" podUID="c4151f14-8e49-44f5-8af3-2ec8c78a0e0e" Apr 17 23:28:07.472753 containerd[1473]: time="2026-04-17T23:28:07.472706418Z" level=info msg="StartContainer for 
\"567adcb3aadb1a7cdef8f9c301989ab13ea01f017bc6a94b977a54adb5686214\" returns successfully" Apr 17 23:28:07.486433 containerd[1473]: time="2026-04-17T23:28:07.486388593Z" level=error msg="Failed to destroy network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.487801 containerd[1473]: time="2026-04-17T23:28:07.487636048Z" level=error msg="Failed to destroy network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.489154 containerd[1473]: time="2026-04-17T23:28:07.489074439Z" level=error msg="encountered an error cleaning up failed sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.489303 containerd[1473]: time="2026-04-17T23:28:07.489273782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7878s,Uid:671ffde3-921c-4e59-9f78-76ef9c5efeb2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.489630 containerd[1473]: time="2026-04-17T23:28:07.489591130Z" level=error msg="encountered an error cleaning up failed 
sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.489680 containerd[1473]: time="2026-04-17T23:28:07.489643768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-cgvqb,Uid:f88cc04e-7a14-485a-8daa-e700592ac36b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.492335 kubelet[2519]: E0417 23:28:07.489597 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.492335 kubelet[2519]: E0417 23:28:07.491119 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-7878s" Apr 17 23:28:07.492335 kubelet[2519]: E0417 23:28:07.491145 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-7878s" Apr 17 23:28:07.492500 kubelet[2519]: E0417 23:28:07.492197 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-7878s_kube-system(671ffde3-921c-4e59-9f78-76ef9c5efeb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-7878s_kube-system(671ffde3-921c-4e59-9f78-76ef9c5efeb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-7878s" podUID="671ffde3-921c-4e59-9f78-76ef9c5efeb2" Apr 17 23:28:07.492759 kubelet[2519]: E0417 23:28:07.492728 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.492921 kubelet[2519]: E0417 23:28:07.492771 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-6f8ddd65ff-cgvqb" Apr 17 23:28:07.492921 kubelet[2519]: E0417 23:28:07.492787 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f8ddd65ff-cgvqb" Apr 17 23:28:07.492921 kubelet[2519]: E0417 23:28:07.492826 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8ddd65ff-cgvqb_calico-system(f88cc04e-7a14-485a-8daa-e700592ac36b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8ddd65ff-cgvqb_calico-system(f88cc04e-7a14-485a-8daa-e700592ac36b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f8ddd65ff-cgvqb" podUID="f88cc04e-7a14-485a-8daa-e700592ac36b" Apr 17 23:28:07.509932 containerd[1473]: time="2026-04-17T23:28:07.509887689Z" level=error msg="Failed to destroy network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.510594 containerd[1473]: time="2026-04-17T23:28:07.510477032Z" level=error msg="encountered an error cleaning up failed sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.510594 containerd[1473]: time="2026-04-17T23:28:07.510540678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ptlpv,Uid:4bad2253-83ea-4317-8f1d-4aea885a0488,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.511096 kubelet[2519]: E0417 23:28:07.510921 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.511096 kubelet[2519]: E0417 23:28:07.510971 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-ptlpv" Apr 17 23:28:07.511096 kubelet[2519]: E0417 23:28:07.510989 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-ptlpv" Apr 17 23:28:07.512253 kubelet[2519]: E0417 23:28:07.511052 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-ptlpv_calico-system(4bad2253-83ea-4317-8f1d-4aea885a0488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-ptlpv_calico-system(4bad2253-83ea-4317-8f1d-4aea885a0488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-ptlpv" podUID="4bad2253-83ea-4317-8f1d-4aea885a0488" Apr 17 23:28:07.517382 containerd[1473]: time="2026-04-17T23:28:07.516494749Z" level=error msg="Failed to destroy network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.519397 containerd[1473]: time="2026-04-17T23:28:07.519147091Z" level=error msg="encountered an error cleaning up failed sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.519397 containerd[1473]: time="2026-04-17T23:28:07.519220464Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-6z49m,Uid:3c075d03-ef84-4fae-a46f-26a1d184ca06,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.519750 kubelet[2519]: E0417 23:28:07.519529 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.519750 kubelet[2519]: E0417 23:28:07.519687 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-6z49m" Apr 17 23:28:07.520030 kubelet[2519]: E0417 23:28:07.519820 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-6z49m" Apr 17 23:28:07.520221 kubelet[2519]: E0417 23:28:07.520103 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7d764666f9-6z49m_kube-system(3c075d03-ef84-4fae-a46f-26a1d184ca06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-6z49m_kube-system(3c075d03-ef84-4fae-a46f-26a1d184ca06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-6z49m" podUID="3c075d03-ef84-4fae-a46f-26a1d184ca06" Apr 17 23:28:07.530131 containerd[1473]: time="2026-04-17T23:28:07.529300975Z" level=error msg="Failed to destroy network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.530931 containerd[1473]: time="2026-04-17T23:28:07.530644779Z" level=error msg="encountered an error cleaning up failed sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.530931 containerd[1473]: time="2026-04-17T23:28:07.530698978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-8w2zs,Uid:ed934d57-c244-432a-8985-874dc75eb161,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.531136 kubelet[2519]: E0417 23:28:07.530957 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:28:07.531136 kubelet[2519]: E0417 23:28:07.531013 2519 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f8ddd65ff-8w2zs" Apr 17 23:28:07.531136 kubelet[2519]: E0417 23:28:07.531030 2519 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f8ddd65ff-8w2zs" Apr 17 23:28:07.531313 kubelet[2519]: E0417 23:28:07.531259 2519 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8ddd65ff-8w2zs_calico-system(ed934d57-c244-432a-8985-874dc75eb161)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8ddd65ff-8w2zs_calico-system(ed934d57-c244-432a-8985-874dc75eb161)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f8ddd65ff-8w2zs" podUID="ed934d57-c244-432a-8985-874dc75eb161" Apr 17 23:28:08.089919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b-shm.mount: Deactivated successfully. Apr 17 23:28:08.179470 kubelet[2519]: I0417 23:28:08.179433 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:08.180325 containerd[1473]: time="2026-04-17T23:28:08.180279202Z" level=info msg="StopPodSandbox for \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\"" Apr 17 23:28:08.180617 containerd[1473]: time="2026-04-17T23:28:08.180452111Z" level=info msg="Ensure that sandbox 5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3 in task-service has been cleanup successfully" Apr 17 23:28:08.191796 kubelet[2519]: I0417 23:28:08.191435 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:08.195979 containerd[1473]: time="2026-04-17T23:28:08.194190295Z" level=info msg="StopPodSandbox for \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\"" Apr 17 23:28:08.198637 kubelet[2519]: I0417 23:28:08.198426 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:08.199010 containerd[1473]: time="2026-04-17T23:28:08.198858065Z" level=info msg="Ensure that sandbox 254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8 in task-service has been cleanup 
successfully" Apr 17 23:28:08.201266 containerd[1473]: time="2026-04-17T23:28:08.201228393Z" level=info msg="StopPodSandbox for \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\"" Apr 17 23:28:08.203024 containerd[1473]: time="2026-04-17T23:28:08.201501924Z" level=info msg="Ensure that sandbox 570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160 in task-service has been cleanup successfully" Apr 17 23:28:08.207839 kubelet[2519]: I0417 23:28:08.207803 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:08.209277 containerd[1473]: time="2026-04-17T23:28:08.209236019Z" level=info msg="StopPodSandbox for \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\"" Apr 17 23:28:08.210076 containerd[1473]: time="2026-04-17T23:28:08.209523800Z" level=info msg="Ensure that sandbox 9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb in task-service has been cleanup successfully" Apr 17 23:28:08.213173 kubelet[2519]: I0417 23:28:08.213136 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:08.214476 containerd[1473]: time="2026-04-17T23:28:08.213614248Z" level=info msg="StopPodSandbox for \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\"" Apr 17 23:28:08.214476 containerd[1473]: time="2026-04-17T23:28:08.213808049Z" level=info msg="Ensure that sandbox 562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e in task-service has been cleanup successfully" Apr 17 23:28:08.218990 kubelet[2519]: I0417 23:28:08.218960 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:08.221801 containerd[1473]: time="2026-04-17T23:28:08.221228547Z" level=info 
msg="StopPodSandbox for \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\"" Apr 17 23:28:08.221801 containerd[1473]: time="2026-04-17T23:28:08.221466897Z" level=info msg="Ensure that sandbox a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf in task-service has been cleanup successfully" Apr 17 23:28:08.229728 kubelet[2519]: I0417 23:28:08.229650 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-v75dm" podStartSLOduration=1.940488651 podStartE2EDuration="20.229573986s" podCreationTimestamp="2026-04-17 23:27:48 +0000 UTC" firstStartedPulling="2026-04-17 23:27:48.877797507 +0000 UTC m=+23.033619948" lastFinishedPulling="2026-04-17 23:28:07.166882802 +0000 UTC m=+41.322705283" observedRunningTime="2026-04-17 23:28:08.224579891 +0000 UTC m=+42.380402372" watchObservedRunningTime="2026-04-17 23:28:08.229573986 +0000 UTC m=+42.385396467" Apr 17 23:28:08.234143 kubelet[2519]: I0417 23:28:08.234006 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:08.236881 containerd[1473]: time="2026-04-17T23:28:08.236702180Z" level=info msg="StopPodSandbox for \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\"" Apr 17 23:28:08.237462 containerd[1473]: time="2026-04-17T23:28:08.237435961Z" level=info msg="Ensure that sandbox 28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f in task-service has been cleanup successfully" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.457 [INFO][3764] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.457 [INFO][3764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" iface="eth0" netns="/var/run/netns/cni-f4cb4985-c254-ec6b-b7a5-99498c503ce2" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.457 [INFO][3764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" iface="eth0" netns="/var/run/netns/cni-f4cb4985-c254-ec6b-b7a5-99498c503ce2" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.465 [INFO][3764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" iface="eth0" netns="/var/run/netns/cni-f4cb4985-c254-ec6b-b7a5-99498c503ce2" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.465 [INFO][3764] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.465 [INFO][3764] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.546 [INFO][3826] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.546 [INFO][3826] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.547 [INFO][3826] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.564 [WARNING][3826] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.564 [INFO][3826] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.566 [INFO][3826] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.587387 containerd[1473]: 2026-04-17 23:28:08.574 [INFO][3764] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:08.590978 containerd[1473]: time="2026-04-17T23:28:08.590765633Z" level=info msg="TearDown network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\" successfully" Apr 17 23:28:08.590978 containerd[1473]: time="2026-04-17T23:28:08.590803697Z" level=info msg="StopPodSandbox for \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\" returns successfully" Apr 17 23:28:08.589871 systemd[1]: run-netns-cni\x2df4cb4985\x2dc254\x2dec6b\x2db7a5\x2d99498c503ce2.mount: Deactivated successfully. 
Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.423 [INFO][3751] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.424 [INFO][3751] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" iface="eth0" netns="/var/run/netns/cni-d28a6979-ddca-e026-9f1c-e86f2fa2953d" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.424 [INFO][3751] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" iface="eth0" netns="/var/run/netns/cni-d28a6979-ddca-e026-9f1c-e86f2fa2953d" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.425 [INFO][3751] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" iface="eth0" netns="/var/run/netns/cni-d28a6979-ddca-e026-9f1c-e86f2fa2953d" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.425 [INFO][3751] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.425 [INFO][3751] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.554 [INFO][3798] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.554 
[INFO][3798] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.568 [INFO][3798] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.587 [WARNING][3798] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.587 [INFO][3798] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.593 [INFO][3798] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.622736 containerd[1473]: 2026-04-17 23:28:08.608 [INFO][3751] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:08.625148 systemd[1]: run-netns-cni\x2dd28a6979\x2dddca\x2de026\x2d9f1c\x2de86f2fa2953d.mount: Deactivated successfully. 
Apr 17 23:28:08.628198 containerd[1473]: time="2026-04-17T23:28:08.628161707Z" level=info msg="TearDown network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\" successfully" Apr 17 23:28:08.629980 containerd[1473]: time="2026-04-17T23:28:08.629930057Z" level=info msg="StopPodSandbox for \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\" returns successfully" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.432 [INFO][3679] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.433 [INFO][3679] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" iface="eth0" netns="/var/run/netns/cni-e2f816f8-2aaa-f1b0-3ab3-1d702a7d1a34" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.434 [INFO][3679] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" iface="eth0" netns="/var/run/netns/cni-e2f816f8-2aaa-f1b0-3ab3-1d702a7d1a34" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.435 [INFO][3679] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" iface="eth0" netns="/var/run/netns/cni-e2f816f8-2aaa-f1b0-3ab3-1d702a7d1a34" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.435 [INFO][3679] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.435 [INFO][3679] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.585 [INFO][3808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.586 [INFO][3808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.598 [INFO][3808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.615 [WARNING][3808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.615 [INFO][3808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.618 [INFO][3808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.630181 containerd[1473]: 2026-04-17 23:28:08.625 [INFO][3679] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:08.630816 containerd[1473]: time="2026-04-17T23:28:08.630787835Z" level=info msg="TearDown network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\" successfully" Apr 17 23:28:08.630905 containerd[1473]: time="2026-04-17T23:28:08.630889099Z" level=info msg="StopPodSandbox for \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\" returns successfully" Apr 17 23:28:08.634983 systemd[1]: run-netns-cni\x2de2f816f8\x2d2aaa\x2df1b0\x2d3ab3\x2d1d702a7d1a34.mount: Deactivated successfully. 
Apr 17 23:28:08.639206 containerd[1473]: time="2026-04-17T23:28:08.638911455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6z49m,Uid:3c075d03-ef84-4fae-a46f-26a1d184ca06,Namespace:kube-system,Attempt:1,}" Apr 17 23:28:08.640359 containerd[1473]: time="2026-04-17T23:28:08.640173287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7878s,Uid:671ffde3-921c-4e59-9f78-76ef9c5efeb2,Namespace:kube-system,Attempt:1,}" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.430 [INFO][3719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.431 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" iface="eth0" netns="/var/run/netns/cni-e4d2cd8e-2c90-43b5-2215-446877d5e1ce" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.432 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" iface="eth0" netns="/var/run/netns/cni-e4d2cd8e-2c90-43b5-2215-446877d5e1ce" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.433 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" iface="eth0" netns="/var/run/netns/cni-e4d2cd8e-2c90-43b5-2215-446877d5e1ce" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.433 [INFO][3719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.433 [INFO][3719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.617 [INFO][3806] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.618 [INFO][3806] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.618 [INFO][3806] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.640 [WARNING][3806] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.640 [INFO][3806] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.644 [INFO][3806] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.649017 containerd[1473]: 2026-04-17 23:28:08.647 [INFO][3719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:08.651409 containerd[1473]: time="2026-04-17T23:28:08.651195366Z" level=info msg="TearDown network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\" successfully" Apr 17 23:28:08.651409 containerd[1473]: time="2026-04-17T23:28:08.651254243Z" level=info msg="StopPodSandbox for \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\" returns successfully" Apr 17 23:28:08.655684 containerd[1473]: time="2026-04-17T23:28:08.655647801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-8w2zs,Uid:ed934d57-c244-432a-8985-874dc75eb161,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.425 [INFO][3715] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.425 [INFO][3715] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" iface="eth0" netns="/var/run/netns/cni-79307d16-3748-b6a1-1514-f35f716fafc1" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.425 [INFO][3715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" iface="eth0" netns="/var/run/netns/cni-79307d16-3748-b6a1-1514-f35f716fafc1" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.426 [INFO][3715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" iface="eth0" netns="/var/run/netns/cni-79307d16-3748-b6a1-1514-f35f716fafc1" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.426 [INFO][3715] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.426 [INFO][3715] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.644 [INFO][3800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.644 [INFO][3800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.646 [INFO][3800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.661 [WARNING][3800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.661 [INFO][3800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.663 [INFO][3800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.669750 containerd[1473]: 2026-04-17 23:28:08.666 [INFO][3715] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:08.671720 containerd[1473]: time="2026-04-17T23:28:08.669891181Z" level=info msg="TearDown network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\" successfully" Apr 17 23:28:08.671720 containerd[1473]: time="2026-04-17T23:28:08.669918479Z" level=info msg="StopPodSandbox for \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\" returns successfully" Apr 17 23:28:08.673625 containerd[1473]: time="2026-04-17T23:28:08.673587021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ptlpv,Uid:4bad2253-83ea-4317-8f1d-4aea885a0488,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:08.685987 kubelet[2519]: I0417 23:28:08.685944 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-backend-key-pair\") pod \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " Apr 17 23:28:08.685987 kubelet[2519]: I0417 23:28:08.685995 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-nginx-config\" (UniqueName: \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-nginx-config\") pod \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " Apr 17 23:28:08.686576 kubelet[2519]: I0417 23:28:08.686016 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-ca-bundle\") pod \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " Apr 17 
23:28:08.686576 kubelet[2519]: I0417 23:28:08.686047 2519 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-kube-api-access-ft4xv\" (UniqueName: \"kubernetes.io/projected/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-kube-api-access-ft4xv\") pod \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\" (UID: \"c4151f14-8e49-44f5-8af3-2ec8c78a0e0e\") " Apr 17 23:28:08.688602 kubelet[2519]: I0417 23:28:08.688509 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-ca-bundle" pod "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e" (UID: "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:28:08.691427 kubelet[2519]: I0417 23:28:08.690837 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-nginx-config" pod "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e" (UID: "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:28:08.697121 kubelet[2519]: I0417 23:28:08.694421 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-backend-key-pair" pod "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e" (UID: "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:28:08.699697 kubelet[2519]: I0417 23:28:08.699655 2519 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-kube-api-access-ft4xv" pod "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e" (UID: "c4151f14-8e49-44f5-8af3-2ec8c78a0e0e"). InnerVolumeSpecName "kube-api-access-ft4xv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.449 [INFO][3741] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.452 [INFO][3741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" iface="eth0" netns="/var/run/netns/cni-87864644-7e4d-de8e-ff66-6d35ea4f1d1a" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.453 [INFO][3741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" iface="eth0" netns="/var/run/netns/cni-87864644-7e4d-de8e-ff66-6d35ea4f1d1a" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.454 [INFO][3741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" iface="eth0" netns="/var/run/netns/cni-87864644-7e4d-de8e-ff66-6d35ea4f1d1a" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.454 [INFO][3741] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.454 [INFO][3741] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.643 [INFO][3820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.644 [INFO][3820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.663 [INFO][3820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.679 [WARNING][3820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.679 [INFO][3820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.683 [INFO][3820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.700424 containerd[1473]: 2026-04-17 23:28:08.694 [INFO][3741] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:08.701298 containerd[1473]: time="2026-04-17T23:28:08.701267637Z" level=info msg="TearDown network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\" successfully" Apr 17 23:28:08.701397 containerd[1473]: time="2026-04-17T23:28:08.701382989Z" level=info msg="StopPodSandbox for \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\" returns successfully" Apr 17 23:28:08.708264 containerd[1473]: time="2026-04-17T23:28:08.708223844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-cgvqb,Uid:f88cc04e-7a14-485a-8daa-e700592ac36b,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.452 [INFO][3737] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.455 [INFO][3737] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" iface="eth0" netns="/var/run/netns/cni-890ceb8a-fa85-7f47-cada-34a52b40a000" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.455 [INFO][3737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" iface="eth0" netns="/var/run/netns/cni-890ceb8a-fa85-7f47-cada-34a52b40a000" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.456 [INFO][3737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" iface="eth0" netns="/var/run/netns/cni-890ceb8a-fa85-7f47-cada-34a52b40a000" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.456 [INFO][3737] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.456 [INFO][3737] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.652 [INFO][3821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.652 [INFO][3821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.683 [INFO][3821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.707 [WARNING][3821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.707 [INFO][3821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.714 [INFO][3821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:08.730192 containerd[1473]: 2026-04-17 23:28:08.725 [INFO][3737] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:08.731620 containerd[1473]: time="2026-04-17T23:28:08.730705556Z" level=info msg="TearDown network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\" successfully" Apr 17 23:28:08.731620 containerd[1473]: time="2026-04-17T23:28:08.730836638Z" level=info msg="StopPodSandbox for \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\" returns successfully" Apr 17 23:28:08.735027 containerd[1473]: time="2026-04-17T23:28:08.734986603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7649d4fc56-khl48,Uid:6cae17b8-aaee-4419-851b-914e2d779768,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:08.787123 kubelet[2519]: I0417 23:28:08.787018 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-ddb46eeabf\" DevicePath \"\"" Apr 17 23:28:08.787123 kubelet[2519]: I0417 23:28:08.787076 2519 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-nginx-config\") on node \"ci-4081-3-6-n-ddb46eeabf\" DevicePath \"\"" Apr 17 23:28:08.787123 kubelet[2519]: I0417 23:28:08.787087 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-whisker-ca-bundle\") on node \"ci-4081-3-6-n-ddb46eeabf\" DevicePath \"\"" Apr 17 23:28:08.787123 kubelet[2519]: I0417 23:28:08.787097 2519 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ft4xv\" (UniqueName: \"kubernetes.io/projected/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e-kube-api-access-ft4xv\") on node \"ci-4081-3-6-n-ddb46eeabf\" DevicePath \"\"" Apr 17 23:28:09.041418 systemd-networkd[1366]: cali9a16fa50a0b: Link UP 
Apr 17 23:28:09.043259 systemd-networkd[1366]: cali9a16fa50a0b: Gained carrier Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.775 [ERROR][3860] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.811 [INFO][3860] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0 calico-apiserver-6f8ddd65ff- calico-system ed934d57-c244-432a-8985-874dc75eb161 896 0 2026-04-17 23:27:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8ddd65ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf calico-apiserver-6f8ddd65ff-8w2zs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9a16fa50a0b [] [] }} ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.811 [INFO][3860] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.864 [INFO][3923] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" 
HandleID="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.894 [INFO][3923] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" HandleID="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"calico-apiserver-6f8ddd65ff-8w2zs", "timestamp":"2026-04-17 23:28:08.864873135 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.894 [INFO][3923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.894 [INFO][3923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.894 [INFO][3923] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.899 [INFO][3923] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.923 [INFO][3923] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.938 [INFO][3923] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.947 [INFO][3923] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.952 [INFO][3923] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.952 [INFO][3923] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.967 [INFO][3923] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129 Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:08.979 [INFO][3923] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:09.005 [INFO][3923] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.52.1/26] block=192.168.52.0/26 handle="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:09.005 [INFO][3923] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.1/26] handle="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:09.005 [INFO][3923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:09.079613 containerd[1473]: 2026-04-17 23:28:09.005 [INFO][3923] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.1/26] IPv6=[] ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" HandleID="k8s-pod-network.f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.082766 containerd[1473]: 2026-04-17 23:28:09.018 [INFO][3860] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"ed934d57-c244-432a-8985-874dc75eb161", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"calico-apiserver-6f8ddd65ff-8w2zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a16fa50a0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.082766 containerd[1473]: 2026-04-17 23:28:09.018 [INFO][3860] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.1/32] ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.082766 containerd[1473]: 2026-04-17 23:28:09.018 [INFO][3860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a16fa50a0b ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.082766 containerd[1473]: 2026-04-17 23:28:09.042 [INFO][3860] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" 
WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.082766 containerd[1473]: 2026-04-17 23:28:09.047 [INFO][3860] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"ed934d57-c244-432a-8985-874dc75eb161", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129", Pod:"calico-apiserver-6f8ddd65ff-8w2zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a16fa50a0b", MAC:"fa:59:e8:ca:7d:6c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.082766 containerd[1473]: 2026-04-17 23:28:09.073 [INFO][3860] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-8w2zs" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:09.095453 systemd[1]: run-netns-cni\x2de4d2cd8e\x2d2c90\x2d43b5\x2d2215\x2d446877d5e1ce.mount: Deactivated successfully. Apr 17 23:28:09.095575 systemd[1]: run-netns-cni\x2d79307d16\x2d3748\x2db6a1\x2d1514\x2df35f716fafc1.mount: Deactivated successfully. Apr 17 23:28:09.095628 systemd[1]: run-netns-cni\x2d87864644\x2d7e4d\x2dde8e\x2dff66\x2d6d35ea4f1d1a.mount: Deactivated successfully. Apr 17 23:28:09.095674 systemd[1]: run-netns-cni\x2d890ceb8a\x2dfa85\x2d7f47\x2dcada\x2d34a52b40a000.mount: Deactivated successfully. Apr 17 23:28:09.095721 systemd[1]: var-lib-kubelet-pods-c4151f14\x2d8e49\x2d44f5\x2d8af3\x2d2ec8c78a0e0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dft4xv.mount: Deactivated successfully. Apr 17 23:28:09.095776 systemd[1]: var-lib-kubelet-pods-c4151f14\x2d8e49\x2d44f5\x2d8af3\x2d2ec8c78a0e0e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:28:09.140469 containerd[1473]: time="2026-04-17T23:28:09.139630182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:09.140469 containerd[1473]: time="2026-04-17T23:28:09.139698779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:09.140469 containerd[1473]: time="2026-04-17T23:28:09.139721232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.140469 containerd[1473]: time="2026-04-17T23:28:09.139821687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.174273 systemd-networkd[1366]: caliad01d59865d: Link UP Apr 17 23:28:09.182925 systemd-networkd[1366]: caliad01d59865d: Gained carrier Apr 17 23:28:09.205386 systemd[1]: Started cri-containerd-f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129.scope - libcontainer container f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129. Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:08.742 [ERROR][3847] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:08.776 [INFO][3847] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0 coredns-7d764666f9- kube-system 3c075d03-ef84-4fae-a46f-26a1d184ca06 895 0 2026-04-17 23:27:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf coredns-7d764666f9-6z49m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad01d59865d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" 
Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:08.776 [INFO][3847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:08.944 [INFO][3915] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" HandleID="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:08.969 [INFO][3915] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" HandleID="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"coredns-7d764666f9-6z49m", "timestamp":"2026-04-17 23:28:08.943999284 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000374000)} Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:08.969 [INFO][3915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.005 [INFO][3915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.005 [INFO][3915] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.011 [INFO][3915] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.023 [INFO][3915] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.046 [INFO][3915] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.061 [INFO][3915] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.071 [INFO][3915] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.073 [INFO][3915] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.085 [INFO][3915] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.107 [INFO][3915] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" 
host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.124 [INFO][3915] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.2/26] block=192.168.52.0/26 handle="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.124 [INFO][3915] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.2/26] handle="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.124 [INFO][3915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:09.220252 containerd[1473]: 2026-04-17 23:28:09.124 [INFO][3915] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.2/26] IPv6=[] ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" HandleID="k8s-pod-network.204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.221702 containerd[1473]: 2026-04-17 23:28:09.128 [INFO][3847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3c075d03-ef84-4fae-a46f-26a1d184ca06", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"coredns-7d764666f9-6z49m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad01d59865d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.221702 containerd[1473]: 2026-04-17 23:28:09.129 [INFO][3847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.2/32] ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.221702 containerd[1473]: 
2026-04-17 23:28:09.131 [INFO][3847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad01d59865d ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.221702 containerd[1473]: 2026-04-17 23:28:09.189 [INFO][3847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.221702 containerd[1473]: 2026-04-17 23:28:09.193 [INFO][3847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3c075d03-ef84-4fae-a46f-26a1d184ca06", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e", Pod:"coredns-7d764666f9-6z49m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad01d59865d", MAC:"be:62:ea:15:2b:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.222918 containerd[1473]: 2026-04-17 23:28:09.215 [INFO][3847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e" Namespace="kube-system" Pod="coredns-7d764666f9-6z49m" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:09.248675 systemd[1]: Removed slice kubepods-besteffort-podc4151f14_8e49_44f5_8af3_2ec8c78a0e0e.slice - libcontainer container kubepods-besteffort-podc4151f14_8e49_44f5_8af3_2ec8c78a0e0e.slice. 
Apr 17 23:28:09.276378 systemd-networkd[1366]: cali6f7605506b5: Link UP Apr 17 23:28:09.292604 systemd-networkd[1366]: cali6f7605506b5: Gained carrier Apr 17 23:28:09.305483 containerd[1473]: time="2026-04-17T23:28:09.305078582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:09.307144 containerd[1473]: time="2026-04-17T23:28:09.305772683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:09.307144 containerd[1473]: time="2026-04-17T23:28:09.305977035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.307144 containerd[1473]: time="2026-04-17T23:28:09.306886295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.319000 systemd[1]: run-containerd-runc-k8s.io-567adcb3aadb1a7cdef8f9c301989ab13ea01f017bc6a94b977a54adb5686214-runc.QhuNqU.mount: Deactivated successfully. 
Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:08.879 [ERROR][3881] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:08.918 [INFO][3881] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0 calico-apiserver-6f8ddd65ff- calico-system f88cc04e-7a14-485a-8daa-e700592ac36b 898 0 2026-04-17 23:27:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8ddd65ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf calico-apiserver-6f8ddd65ff-cgvqb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6f7605506b5 [] [] }} ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:08.918 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:08.987 [INFO][3972] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" HandleID="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" 
Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.007 [INFO][3972] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" HandleID="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000399f20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"calico-apiserver-6f8ddd65ff-cgvqb", "timestamp":"2026-04-17 23:28:08.987508756 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d4000)} Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.008 [INFO][3972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.125 [INFO][3972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.125 [INFO][3972] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.132 [INFO][3972] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.193 [INFO][3972] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.203 [INFO][3972] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.207 [INFO][3972] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.213 [INFO][3972] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.214 [INFO][3972] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.221 [INFO][3972] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2 Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.229 [INFO][3972] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.236 [INFO][3972] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.52.3/26] block=192.168.52.0/26 handle="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.236 [INFO][3972] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.3/26] handle="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.236 [INFO][3972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:09.352884 containerd[1473]: 2026-04-17 23:28:09.236 [INFO][3972] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.3/26] IPv6=[] ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" HandleID="k8s-pod-network.ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.353532 containerd[1473]: 2026-04-17 23:28:09.256 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"f88cc04e-7a14-485a-8daa-e700592ac36b", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"calico-apiserver-6f8ddd65ff-cgvqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f7605506b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.353532 containerd[1473]: 2026-04-17 23:28:09.256 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.3/32] ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.353532 containerd[1473]: 2026-04-17 23:28:09.256 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f7605506b5 ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.353532 containerd[1473]: 2026-04-17 23:28:09.296 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" 
WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.353532 containerd[1473]: 2026-04-17 23:28:09.297 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"f88cc04e-7a14-485a-8daa-e700592ac36b", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2", Pod:"calico-apiserver-6f8ddd65ff-cgvqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f7605506b5", MAC:"e2:ae:fb:80:8d:e6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.353532 containerd[1473]: 2026-04-17 23:28:09.345 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2" Namespace="calico-system" Pod="calico-apiserver-6f8ddd65ff-cgvqb" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:09.353173 systemd[1]: Started cri-containerd-204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e.scope - libcontainer container 204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e. Apr 17 23:28:09.387112 systemd[1]: Created slice kubepods-besteffort-pod970a4fba_59d8_4f44_827c_ca2df599dacf.slice - libcontainer container kubepods-besteffort-pod970a4fba_59d8_4f44_827c_ca2df599dacf.slice. Apr 17 23:28:09.389433 containerd[1473]: time="2026-04-17T23:28:09.386632339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:09.389433 containerd[1473]: time="2026-04-17T23:28:09.386692252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:09.389433 containerd[1473]: time="2026-04-17T23:28:09.386703658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.389433 containerd[1473]: time="2026-04-17T23:28:09.386782982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.414890 systemd[1]: Started cri-containerd-ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2.scope - libcontainer container ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2. 
Apr 17 23:28:09.443080 containerd[1473]: time="2026-04-17T23:28:09.443016911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6z49m,Uid:3c075d03-ef84-4fae-a46f-26a1d184ca06,Namespace:kube-system,Attempt:1,} returns sandbox id \"204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e\"" Apr 17 23:28:09.450645 containerd[1473]: time="2026-04-17T23:28:09.450435906Z" level=info msg="CreateContainer within sandbox \"204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:28:09.495829 systemd-networkd[1366]: califaed04d94e0: Link UP Apr 17 23:28:09.501139 systemd-networkd[1366]: califaed04d94e0: Gained carrier Apr 17 23:28:09.504829 kubelet[2519]: I0417 23:28:09.504639 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/970a4fba-59d8-4f44-827c-ca2df599dacf-nginx-config\") pod \"whisker-86c9fc6889-rlp8h\" (UID: \"970a4fba-59d8-4f44-827c-ca2df599dacf\") " pod="calico-system/whisker-86c9fc6889-rlp8h" Apr 17 23:28:09.504829 kubelet[2519]: I0417 23:28:09.504682 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/970a4fba-59d8-4f44-827c-ca2df599dacf-whisker-backend-key-pair\") pod \"whisker-86c9fc6889-rlp8h\" (UID: \"970a4fba-59d8-4f44-827c-ca2df599dacf\") " pod="calico-system/whisker-86c9fc6889-rlp8h" Apr 17 23:28:09.504829 kubelet[2519]: I0417 23:28:09.504703 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/970a4fba-59d8-4f44-827c-ca2df599dacf-whisker-ca-bundle\") pod \"whisker-86c9fc6889-rlp8h\" (UID: \"970a4fba-59d8-4f44-827c-ca2df599dacf\") " pod="calico-system/whisker-86c9fc6889-rlp8h" Apr 17 23:28:09.504829 kubelet[2519]: I0417 
23:28:09.504721 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbhl\" (UniqueName: \"kubernetes.io/projected/970a4fba-59d8-4f44-827c-ca2df599dacf-kube-api-access-9fbhl\") pod \"whisker-86c9fc6889-rlp8h\" (UID: \"970a4fba-59d8-4f44-827c-ca2df599dacf\") " pod="calico-system/whisker-86c9fc6889-rlp8h" Apr 17 23:28:09.514154 containerd[1473]: time="2026-04-17T23:28:09.513137588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-8w2zs,Uid:ed934d57-c244-432a-8985-874dc75eb161,Namespace:calico-system,Attempt:1,} returns sandbox id \"f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129\"" Apr 17 23:28:09.518556 containerd[1473]: time="2026-04-17T23:28:09.518261523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:28:09.520158 containerd[1473]: time="2026-04-17T23:28:09.519899142Z" level=info msg="CreateContainer within sandbox \"204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f659e72e55f88c830aa80065f5762bca60203be9ceb1b881f612766338b69980\"" Apr 17 23:28:09.523199 containerd[1473]: time="2026-04-17T23:28:09.522308106Z" level=info msg="StartContainer for \"f659e72e55f88c830aa80065f5762bca60203be9ceb1b881f612766338b69980\"" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:08.825 [ERROR][3894] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:08.848 [INFO][3894] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0 calico-kube-controllers-7649d4fc56- calico-system 6cae17b8-aaee-4419-851b-914e2d779768 897 
0 2026-04-17 23:27:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7649d4fc56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf calico-kube-controllers-7649d4fc56-khl48 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califaed04d94e0 [] [] }} ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:08.848 [INFO][3894] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:08.995 [INFO][3933] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" HandleID="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.042 [INFO][3933] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" HandleID="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40003a1d40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"calico-kube-controllers-7649d4fc56-khl48", "timestamp":"2026-04-17 23:28:08.995131221 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400035b1e0)} Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.042 [INFO][3933] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.239 [INFO][3933] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.239 [INFO][3933] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.252 [INFO][3933] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.291 [INFO][3933] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.331 [INFO][3933] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.364 [INFO][3933] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.405 [INFO][3933] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.405 [INFO][3933] ipam/ipam.go 
1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.417 [INFO][3933] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.436 [INFO][3933] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.467 [INFO][3933] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.52.4/26] block=192.168.52.0/26 handle="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.467 [INFO][3933] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.4/26] handle="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.467 [INFO][3933] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:28:09.562319 containerd[1473]: 2026-04-17 23:28:09.467 [INFO][3933] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.4/26] IPv6=[] ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" HandleID="k8s-pod-network.720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.562938 containerd[1473]: 2026-04-17 23:28:09.477 [INFO][3894] cni-plugin/k8s.go 418: Populated endpoint ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0", GenerateName:"calico-kube-controllers-7649d4fc56-", Namespace:"calico-system", SelfLink:"", UID:"6cae17b8-aaee-4419-851b-914e2d779768", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7649d4fc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"calico-kube-controllers-7649d4fc56-khl48", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaed04d94e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.562938 containerd[1473]: 2026-04-17 23:28:09.478 [INFO][3894] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.4/32] ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.562938 containerd[1473]: 2026-04-17 23:28:09.479 [INFO][3894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califaed04d94e0 ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.562938 containerd[1473]: 2026-04-17 23:28:09.505 [INFO][3894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.562938 containerd[1473]: 2026-04-17 23:28:09.507 [INFO][3894] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" Pod="calico-kube-controllers-7649d4fc56-khl48" 
WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0", GenerateName:"calico-kube-controllers-7649d4fc56-", Namespace:"calico-system", SelfLink:"", UID:"6cae17b8-aaee-4419-851b-914e2d779768", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7649d4fc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e", Pod:"calico-kube-controllers-7649d4fc56-khl48", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaed04d94e0", MAC:"da:35:1b:54:56:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.562938 containerd[1473]: 2026-04-17 23:28:09.536 [INFO][3894] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e" Namespace="calico-system" 
Pod="calico-kube-controllers-7649d4fc56-khl48" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:09.586914 systemd-networkd[1366]: cali44528935d38: Link UP Apr 17 23:28:09.587686 systemd-networkd[1366]: cali44528935d38: Gained carrier Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:08.828 [ERROR][3891] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:08.866 [INFO][3891] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0 coredns-7d764666f9- kube-system 671ffde3-921c-4e59-9f78-76ef9c5efeb2 894 0 2026-04-17 23:27:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf coredns-7d764666f9-7878s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali44528935d38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:08.866 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.022 
[INFO][3939] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" HandleID="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.092 [INFO][3939] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" HandleID="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003919b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"coredns-7d764666f9-7878s", "timestamp":"2026-04-17 23:28:09.022515851 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000398420)} Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.092 [INFO][3939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.469 [INFO][3939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.469 [INFO][3939] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.474 [INFO][3939] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.490 [INFO][3939] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.505 [INFO][3939] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.510 [INFO][3939] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.515 [INFO][3939] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.516 [INFO][3939] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.522 [INFO][3939] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3 Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.540 [INFO][3939] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.553 [INFO][3939] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.52.5/26] block=192.168.52.0/26 handle="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.554 [INFO][3939] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.5/26] handle="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.555 [INFO][3939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:09.638045 containerd[1473]: 2026-04-17 23:28:09.556 [INFO][3939] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.5/26] IPv6=[] ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" HandleID="k8s-pod-network.f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.638697 containerd[1473]: 2026-04-17 23:28:09.564 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"671ffde3-921c-4e59-9f78-76ef9c5efeb2", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"coredns-7d764666f9-7878s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44528935d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.638697 containerd[1473]: 2026-04-17 23:28:09.566 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.5/32] ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.638697 containerd[1473]: 2026-04-17 23:28:09.566 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44528935d38 
ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.638697 containerd[1473]: 2026-04-17 23:28:09.587 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.638697 containerd[1473]: 2026-04-17 23:28:09.599 [INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"671ffde3-921c-4e59-9f78-76ef9c5efeb2", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", 
ContainerID:"f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3", Pod:"coredns-7d764666f9-7878s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44528935d38", MAC:"2e:19:d7:48:1f:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.638913 containerd[1473]: 2026-04-17 23:28:09.615 [INFO][3891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3" Namespace="kube-system" Pod="coredns-7d764666f9-7878s" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:09.686315 systemd[1]: Started cri-containerd-f659e72e55f88c830aa80065f5762bca60203be9ceb1b881f612766338b69980.scope - libcontainer container f659e72e55f88c830aa80065f5762bca60203be9ceb1b881f612766338b69980. 
Apr 17 23:28:09.692791 containerd[1473]: time="2026-04-17T23:28:09.691628193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8ddd65ff-cgvqb,Uid:f88cc04e-7a14-485a-8daa-e700592ac36b,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2\"" Apr 17 23:28:09.696087 containerd[1473]: time="2026-04-17T23:28:09.695344154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c9fc6889-rlp8h,Uid:970a4fba-59d8-4f44-827c-ca2df599dacf,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:09.706300 containerd[1473]: time="2026-04-17T23:28:09.705941535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:09.708665 containerd[1473]: time="2026-04-17T23:28:09.707325015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:09.708665 containerd[1473]: time="2026-04-17T23:28:09.707522123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.708665 containerd[1473]: time="2026-04-17T23:28:09.707644190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.714298 containerd[1473]: time="2026-04-17T23:28:09.713114795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:09.716938 containerd[1473]: time="2026-04-17T23:28:09.715394888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:09.716938 containerd[1473]: time="2026-04-17T23:28:09.715419701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.716938 containerd[1473]: time="2026-04-17T23:28:09.715829767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.726884 systemd-networkd[1366]: cali44873c82834: Link UP Apr 17 23:28:09.729518 systemd-networkd[1366]: cali44873c82834: Gained carrier Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:08.825 [ERROR][3869] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:08.875 [INFO][3869] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0 goldmane-9f7667bb8- calico-system 4bad2253-83ea-4317-8f1d-4aea885a0488 893 0 2026-04-17 23:27:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf goldmane-9f7667bb8-ptlpv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali44873c82834 [] [] }} ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:08.875 [INFO][3869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.770404 
containerd[1473]: 2026-04-17 23:28:09.053 [INFO][3958] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" HandleID="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.114 [INFO][3958] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" HandleID="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c0340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"goldmane-9f7667bb8-ptlpv", "timestamp":"2026-04-17 23:28:09.053164526 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.114 [INFO][3958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.557 [INFO][3958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.557 [INFO][3958] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.585 [INFO][3958] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.609 [INFO][3958] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.642 [INFO][3958] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.646 [INFO][3958] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.651 [INFO][3958] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.651 [INFO][3958] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.656 [INFO][3958] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.671 [INFO][3958] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.691 [INFO][3958] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.52.6/26] block=192.168.52.0/26 handle="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.693 [INFO][3958] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.6/26] handle="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.694 [INFO][3958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:09.770404 containerd[1473]: 2026-04-17 23:28:09.695 [INFO][3958] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.6/26] IPv6=[] ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" HandleID="k8s-pod-network.90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.771008 containerd[1473]: 2026-04-17 23:28:09.709 [INFO][3869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4bad2253-83ea-4317-8f1d-4aea885a0488", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"goldmane-9f7667bb8-ptlpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44873c82834", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.771008 containerd[1473]: 2026-04-17 23:28:09.710 [INFO][3869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.6/32] ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.771008 containerd[1473]: 2026-04-17 23:28:09.710 [INFO][3869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44873c82834 ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.771008 containerd[1473]: 2026-04-17 23:28:09.732 [INFO][3869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.771008 containerd[1473]: 2026-04-17 23:28:09.733 [INFO][3869] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4bad2253-83ea-4317-8f1d-4aea885a0488", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c", Pod:"goldmane-9f7667bb8-ptlpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44873c82834", MAC:"fe:c3:2f:ec:a6:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:09.771008 containerd[1473]: 2026-04-17 23:28:09.750 [INFO][3869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c" Namespace="calico-system" Pod="goldmane-9f7667bb8-ptlpv" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:09.781276 systemd[1]: Started cri-containerd-f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3.scope - libcontainer container f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3. Apr 17 23:28:09.802271 systemd[1]: Started cri-containerd-720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e.scope - libcontainer container 720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e. Apr 17 23:28:09.832248 containerd[1473]: time="2026-04-17T23:28:09.831870868Z" level=info msg="StartContainer for \"f659e72e55f88c830aa80065f5762bca60203be9ceb1b881f612766338b69980\" returns successfully" Apr 17 23:28:09.853345 containerd[1473]: time="2026-04-17T23:28:09.853166205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:09.853723 containerd[1473]: time="2026-04-17T23:28:09.853498468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:09.853821 containerd[1473]: time="2026-04-17T23:28:09.853793470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.854009 containerd[1473]: time="2026-04-17T23:28:09.853983254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:09.888506 systemd[1]: Started cri-containerd-90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c.scope - libcontainer container 90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c. 
Apr 17 23:28:09.997075 kubelet[2519]: I0417 23:28:09.996886 2519 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4151f14-8e49-44f5-8af3-2ec8c78a0e0e" path="/var/lib/kubelet/pods/c4151f14-8e49-44f5-8af3-2ec8c78a0e0e/volumes" Apr 17 23:28:10.058501 systemd-networkd[1366]: cali3da483d8410: Link UP Apr 17 23:28:10.058982 systemd-networkd[1366]: cali3da483d8410: Gained carrier Apr 17 23:28:10.095500 containerd[1473]: time="2026-04-17T23:28:10.092031010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7878s,Uid:671ffde3-921c-4e59-9f78-76ef9c5efeb2,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3\"" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.875 [ERROR][4312] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.902 [INFO][4312] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0 whisker-86c9fc6889- calico-system 970a4fba-59d8-4f44-827c-ca2df599dacf 927 0 2026-04-17 23:28:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86c9fc6889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf whisker-86c9fc6889-rlp8h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3da483d8410 [] [] }} ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.902 [INFO][4312] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.942 [INFO][4385] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" HandleID="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.956 [INFO][4385] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" HandleID="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"whisker-86c9fc6889-rlp8h", "timestamp":"2026-04-17 23:28:09.942424034 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.956 [INFO][4385] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.956 [INFO][4385] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.956 [INFO][4385] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.960 [INFO][4385] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.967 [INFO][4385] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.977 [INFO][4385] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.988 [INFO][4385] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.994 [INFO][4385] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:09.994 [INFO][4385] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:10.000 [INFO][4385] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9 Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:10.020 [INFO][4385] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:10.037 [INFO][4385] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.52.7/26] block=192.168.52.0/26 handle="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:10.037 [INFO][4385] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.7/26] handle="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:10.037 [INFO][4385] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:10.112241 containerd[1473]: 2026-04-17 23:28:10.038 [INFO][4385] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.7/26] IPv6=[] ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" HandleID="k8s-pod-network.a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.113507 containerd[1473]: 2026-04-17 23:28:10.050 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0", GenerateName:"whisker-86c9fc6889-", Namespace:"calico-system", SelfLink:"", UID:"970a4fba-59d8-4f44-827c-ca2df599dacf", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c9fc6889", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"whisker-86c9fc6889-rlp8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3da483d8410", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:10.113507 containerd[1473]: 2026-04-17 23:28:10.051 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.7/32] ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.113507 containerd[1473]: 2026-04-17 23:28:10.051 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3da483d8410 ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.113507 containerd[1473]: 2026-04-17 23:28:10.059 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.113507 containerd[1473]: 2026-04-17 23:28:10.063 [INFO][4312] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0", GenerateName:"whisker-86c9fc6889-", Namespace:"calico-system", SelfLink:"", UID:"970a4fba-59d8-4f44-827c-ca2df599dacf", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c9fc6889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9", Pod:"whisker-86c9fc6889-rlp8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.52.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3da483d8410", MAC:"1e:5a:a9:f3:fc:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:10.113507 containerd[1473]: 2026-04-17 23:28:10.097 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9" Namespace="calico-system" Pod="whisker-86c9fc6889-rlp8h" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--86c9fc6889--rlp8h-eth0" Apr 17 23:28:10.116352 containerd[1473]: time="2026-04-17T23:28:10.116244408Z" level=info msg="CreateContainer within sandbox \"f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:28:10.136268 containerd[1473]: time="2026-04-17T23:28:10.136218449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7649d4fc56-khl48,Uid:6cae17b8-aaee-4419-851b-914e2d779768,Namespace:calico-system,Attempt:1,} returns sandbox id \"720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e\"" Apr 17 23:28:10.165879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259520356.mount: Deactivated successfully. Apr 17 23:28:10.186823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999782404.mount: Deactivated successfully. Apr 17 23:28:10.193179 containerd[1473]: time="2026-04-17T23:28:10.193118159Z" level=info msg="CreateContainer within sandbox \"f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"570d39a0f66b002f081a57562e22ee9f2174be0b958e861cb983f8901f4742aa\"" Apr 17 23:28:10.196193 containerd[1473]: time="2026-04-17T23:28:10.193900255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:10.196193 containerd[1473]: time="2026-04-17T23:28:10.193969928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:10.196193 containerd[1473]: time="2026-04-17T23:28:10.193997702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.196193 containerd[1473]: time="2026-04-17T23:28:10.194177828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:10.198747 containerd[1473]: time="2026-04-17T23:28:10.197342669Z" level=info msg="StartContainer for \"570d39a0f66b002f081a57562e22ee9f2174be0b958e861cb983f8901f4742aa\"" Apr 17 23:28:10.205081 containerd[1473]: time="2026-04-17T23:28:10.203482461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-ptlpv,Uid:4bad2253-83ea-4317-8f1d-4aea885a0488,Namespace:calico-system,Attempt:1,} returns sandbox id \"90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c\"" Apr 17 23:28:10.253278 systemd[1]: Started cri-containerd-a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9.scope - libcontainer container a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9. Apr 17 23:28:10.283795 systemd[1]: Started cri-containerd-570d39a0f66b002f081a57562e22ee9f2174be0b958e861cb983f8901f4742aa.scope - libcontainer container 570d39a0f66b002f081a57562e22ee9f2174be0b958e861cb983f8901f4742aa. 
Apr 17 23:28:10.291245 systemd-networkd[1366]: cali9a16fa50a0b: Gained IPv6LL Apr 17 23:28:10.298409 kubelet[2519]: I0417 23:28:10.298194 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6z49m" podStartSLOduration=38.298173976 podStartE2EDuration="38.298173976s" podCreationTimestamp="2026-04-17 23:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:10.290066599 +0000 UTC m=+44.445889080" watchObservedRunningTime="2026-04-17 23:28:10.298173976 +0000 UTC m=+44.453996457" Apr 17 23:28:10.357600 containerd[1473]: time="2026-04-17T23:28:10.357478802Z" level=info msg="StartContainer for \"570d39a0f66b002f081a57562e22ee9f2174be0b958e861cb983f8901f4742aa\" returns successfully" Apr 17 23:28:10.393647 containerd[1473]: time="2026-04-17T23:28:10.393601725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c9fc6889-rlp8h,Uid:970a4fba-59d8-4f44-827c-ca2df599dacf,Namespace:calico-system,Attempt:0,} returns sandbox id \"a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9\"" Apr 17 23:28:10.547990 systemd-networkd[1366]: califaed04d94e0: Gained IPv6LL Apr 17 23:28:10.590100 kernel: calico-node[3970]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:28:10.675700 systemd-networkd[1366]: cali44528935d38: Gained IPv6LL Apr 17 23:28:10.997413 systemd-networkd[1366]: vxlan.calico: Link UP Apr 17 23:28:10.997633 systemd-networkd[1366]: vxlan.calico: Gained carrier Apr 17 23:28:11.060234 systemd-networkd[1366]: caliad01d59865d: Gained IPv6LL Apr 17 23:28:11.063610 systemd-networkd[1366]: cali44873c82834: Gained IPv6LL Apr 17 23:28:11.314099 kubelet[2519]: I0417 23:28:11.312918 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-7878s" podStartSLOduration=39.312902672 podStartE2EDuration="39.312902672s" 
podCreationTimestamp="2026-04-17 23:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:11.311815302 +0000 UTC m=+45.467637783" watchObservedRunningTime="2026-04-17 23:28:11.312902672 +0000 UTC m=+45.468725113" Apr 17 23:28:11.315594 systemd-networkd[1366]: cali6f7605506b5: Gained IPv6LL Apr 17 23:28:11.443409 systemd-networkd[1366]: cali3da483d8410: Gained IPv6LL Apr 17 23:28:12.275510 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Apr 17 23:28:12.743512 containerd[1473]: time="2026-04-17T23:28:12.743457218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:12.744843 containerd[1473]: time="2026-04-17T23:28:12.744600824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 17 23:28:12.746100 containerd[1473]: time="2026-04-17T23:28:12.745899683Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:12.749712 containerd[1473]: time="2026-04-17T23:28:12.748846129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:12.749965 containerd[1473]: time="2026-04-17T23:28:12.749933701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.231443692s" Apr 17 23:28:12.750098 
containerd[1473]: time="2026-04-17T23:28:12.750038084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:28:12.751618 containerd[1473]: time="2026-04-17T23:28:12.751580329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:28:12.761626 containerd[1473]: time="2026-04-17T23:28:12.761564635Z" level=info msg="CreateContainer within sandbox \"f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:28:12.788866 containerd[1473]: time="2026-04-17T23:28:12.788825939Z" level=info msg="CreateContainer within sandbox \"f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3ad205b3c92971d884d519b55b8d94700e4403d7ce9ddb2b52d7c9a9815d805b\"" Apr 17 23:28:12.792023 containerd[1473]: time="2026-04-17T23:28:12.791376188Z" level=info msg="StartContainer for \"3ad205b3c92971d884d519b55b8d94700e4403d7ce9ddb2b52d7c9a9815d805b\"" Apr 17 23:28:12.836424 systemd[1]: Started cri-containerd-3ad205b3c92971d884d519b55b8d94700e4403d7ce9ddb2b52d7c9a9815d805b.scope - libcontainer container 3ad205b3c92971d884d519b55b8d94700e4403d7ce9ddb2b52d7c9a9815d805b. 
Apr 17 23:28:12.877517 containerd[1473]: time="2026-04-17T23:28:12.877421857Z" level=info msg="StartContainer for \"3ad205b3c92971d884d519b55b8d94700e4403d7ce9ddb2b52d7c9a9815d805b\" returns successfully" Apr 17 23:28:13.142250 containerd[1473]: time="2026-04-17T23:28:13.142200901Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:13.143324 containerd[1473]: time="2026-04-17T23:28:13.143294840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:28:13.145832 containerd[1473]: time="2026-04-17T23:28:13.145794336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 394.171183ms" Apr 17 23:28:13.145889 containerd[1473]: time="2026-04-17T23:28:13.145844685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:28:13.147232 containerd[1473]: time="2026-04-17T23:28:13.147201774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:28:13.153954 containerd[1473]: time="2026-04-17T23:28:13.153831930Z" level=info msg="CreateContainer within sandbox \"ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:28:13.174493 containerd[1473]: time="2026-04-17T23:28:13.174438043Z" level=info msg="CreateContainer within sandbox \"ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"34627090f4b9d6aad3c2df670eabb7d4a839dcc6de3a3b796022bc0767ba7ed4\"" Apr 17 23:28:13.180050 containerd[1473]: time="2026-04-17T23:28:13.180011080Z" level=info msg="StartContainer for \"34627090f4b9d6aad3c2df670eabb7d4a839dcc6de3a3b796022bc0767ba7ed4\"" Apr 17 23:28:13.225261 systemd[1]: Started cri-containerd-34627090f4b9d6aad3c2df670eabb7d4a839dcc6de3a3b796022bc0767ba7ed4.scope - libcontainer container 34627090f4b9d6aad3c2df670eabb7d4a839dcc6de3a3b796022bc0767ba7ed4. Apr 17 23:28:13.291141 containerd[1473]: time="2026-04-17T23:28:13.291002557Z" level=info msg="StartContainer for \"34627090f4b9d6aad3c2df670eabb7d4a839dcc6de3a3b796022bc0767ba7ed4\" returns successfully" Apr 17 23:28:13.354568 kubelet[2519]: I0417 23:28:13.354382 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f8ddd65ff-cgvqb" podStartSLOduration=23.904658345 podStartE2EDuration="27.354357848s" podCreationTimestamp="2026-04-17 23:27:46 +0000 UTC" firstStartedPulling="2026-04-17 23:28:09.696864509 +0000 UTC m=+43.852686990" lastFinishedPulling="2026-04-17 23:28:13.146564012 +0000 UTC m=+47.302386493" observedRunningTime="2026-04-17 23:28:13.35365445 +0000 UTC m=+47.509476971" watchObservedRunningTime="2026-04-17 23:28:13.354357848 +0000 UTC m=+47.510180329" Apr 17 23:28:13.354947 kubelet[2519]: I0417 23:28:13.354632 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f8ddd65ff-8w2zs" podStartSLOduration=24.120069157 podStartE2EDuration="27.35462576s" podCreationTimestamp="2026-04-17 23:27:46 +0000 UTC" firstStartedPulling="2026-04-17 23:28:09.51696461 +0000 UTC m=+43.672787091" lastFinishedPulling="2026-04-17 23:28:12.751521213 +0000 UTC m=+46.907343694" observedRunningTime="2026-04-17 23:28:13.332458002 +0000 UTC m=+47.488280443" watchObservedRunningTime="2026-04-17 23:28:13.35462576 +0000 UTC m=+47.510448241" Apr 17 23:28:16.829672 containerd[1473]: 
time="2026-04-17T23:28:16.829599159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:16.831468 containerd[1473]: time="2026-04-17T23:28:16.831388257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 17 23:28:16.832559 containerd[1473]: time="2026-04-17T23:28:16.832465933Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:16.835093 containerd[1473]: time="2026-04-17T23:28:16.834991383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:16.837397 containerd[1473]: time="2026-04-17T23:28:16.836855757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.689614642s" Apr 17 23:28:16.837397 containerd[1473]: time="2026-04-17T23:28:16.836895056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 17 23:28:16.838814 containerd[1473]: time="2026-04-17T23:28:16.838629567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:28:16.860816 containerd[1473]: time="2026-04-17T23:28:16.860776261Z" level=info msg="CreateContainer within sandbox 
\"720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:28:16.879506 containerd[1473]: time="2026-04-17T23:28:16.879323829Z" level=info msg="CreateContainer within sandbox \"720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d2542f4b572cdc241a5c867822cedc425750f37c1f1fa8840e7e8d5dc6d09f24\"" Apr 17 23:28:16.882395 containerd[1473]: time="2026-04-17T23:28:16.880223461Z" level=info msg="StartContainer for \"d2542f4b572cdc241a5c867822cedc425750f37c1f1fa8840e7e8d5dc6d09f24\"" Apr 17 23:28:16.912465 systemd[1]: Started cri-containerd-d2542f4b572cdc241a5c867822cedc425750f37c1f1fa8840e7e8d5dc6d09f24.scope - libcontainer container d2542f4b572cdc241a5c867822cedc425750f37c1f1fa8840e7e8d5dc6d09f24. Apr 17 23:28:16.956082 containerd[1473]: time="2026-04-17T23:28:16.956034152Z" level=info msg="StartContainer for \"d2542f4b572cdc241a5c867822cedc425750f37c1f1fa8840e7e8d5dc6d09f24\" returns successfully" Apr 17 23:28:17.354034 kubelet[2519]: I0417 23:28:17.353844 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7649d4fc56-khl48" podStartSLOduration=22.654850197000002 podStartE2EDuration="29.353827345s" podCreationTimestamp="2026-04-17 23:27:48 +0000 UTC" firstStartedPulling="2026-04-17 23:28:10.139192318 +0000 UTC m=+44.295014799" lastFinishedPulling="2026-04-17 23:28:16.838169426 +0000 UTC m=+50.993991947" observedRunningTime="2026-04-17 23:28:17.351116755 +0000 UTC m=+51.506939236" watchObservedRunningTime="2026-04-17 23:28:17.353827345 +0000 UTC m=+51.509649826" Apr 17 23:28:19.146468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571453575.mount: Deactivated successfully. 
Apr 17 23:28:19.459089 containerd[1473]: time="2026-04-17T23:28:19.458618743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:19.460119 containerd[1473]: time="2026-04-17T23:28:19.460052167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 17 23:28:19.461252 containerd[1473]: time="2026-04-17T23:28:19.460932446Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:19.464190 containerd[1473]: time="2026-04-17T23:28:19.464150797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:19.464969 containerd[1473]: time="2026-04-17T23:28:19.464934276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.626268733s" Apr 17 23:28:19.465049 containerd[1473]: time="2026-04-17T23:28:19.464969370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 17 23:28:19.467618 containerd[1473]: time="2026-04-17T23:28:19.467515527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:28:19.471410 containerd[1473]: time="2026-04-17T23:28:19.471256891Z" level=info msg="CreateContainer within sandbox 
\"90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:28:19.508205 containerd[1473]: time="2026-04-17T23:28:19.508043516Z" level=info msg="CreateContainer within sandbox \"90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f077f6534b369e32b99881481a132bffac0d0a2652732f80f881139244179762\"" Apr 17 23:28:19.509296 containerd[1473]: time="2026-04-17T23:28:19.509194344Z" level=info msg="StartContainer for \"f077f6534b369e32b99881481a132bffac0d0a2652732f80f881139244179762\"" Apr 17 23:28:19.575247 systemd[1]: Started cri-containerd-f077f6534b369e32b99881481a132bffac0d0a2652732f80f881139244179762.scope - libcontainer container f077f6534b369e32b99881481a132bffac0d0a2652732f80f881139244179762. Apr 17 23:28:19.612006 containerd[1473]: time="2026-04-17T23:28:19.611942957Z" level=info msg="StartContainer for \"f077f6534b369e32b99881481a132bffac0d0a2652732f80f881139244179762\" returns successfully" Apr 17 23:28:21.310777 containerd[1473]: time="2026-04-17T23:28:21.309759665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:21.310777 containerd[1473]: time="2026-04-17T23:28:21.310697489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 17 23:28:21.311983 containerd[1473]: time="2026-04-17T23:28:21.311942625Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:21.317223 containerd[1473]: time="2026-04-17T23:28:21.317188668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:21.317756 containerd[1473]: time="2026-04-17T23:28:21.317718943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.850169882s" Apr 17 23:28:21.317819 containerd[1473]: time="2026-04-17T23:28:21.317755676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 17 23:28:21.331607 containerd[1473]: time="2026-04-17T23:28:21.331551253Z" level=info msg="CreateContainer within sandbox \"a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:28:21.350279 containerd[1473]: time="2026-04-17T23:28:21.350223498Z" level=info msg="CreateContainer within sandbox \"a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"11dfb1be8ef6bbb7057df5b0eead46696676146d84a6750f7e4186a5d165afbd\"" Apr 17 23:28:21.352662 containerd[1473]: time="2026-04-17T23:28:21.352295577Z" level=info msg="StartContainer for \"11dfb1be8ef6bbb7057df5b0eead46696676146d84a6750f7e4186a5d165afbd\"" Apr 17 23:28:21.405536 systemd[1]: Started cri-containerd-11dfb1be8ef6bbb7057df5b0eead46696676146d84a6750f7e4186a5d165afbd.scope - libcontainer container 11dfb1be8ef6bbb7057df5b0eead46696676146d84a6750f7e4186a5d165afbd. Apr 17 23:28:21.408650 systemd[1]: Started sshd@7-142.132.185.111:22-50.85.169.122:43850.service - OpenSSH per-connection server daemon (50.85.169.122:43850). 
Apr 17 23:28:21.540736 sshd[4958]: Accepted publickey for core from 50.85.169.122 port 43850 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:28:21.546489 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:28:21.556991 systemd-logind[1453]: New session 8 of user core. Apr 17 23:28:21.562722 containerd[1473]: time="2026-04-17T23:28:21.562506433Z" level=info msg="StartContainer for \"11dfb1be8ef6bbb7057df5b0eead46696676146d84a6750f7e4186a5d165afbd\" returns successfully" Apr 17 23:28:21.563274 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:28:21.569897 containerd[1473]: time="2026-04-17T23:28:21.569842882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:28:21.759512 sshd[4958]: pam_unix(sshd:session): session closed for user core Apr 17 23:28:21.765494 systemd[1]: sshd@7-142.132.185.111:22-50.85.169.122:43850.service: Deactivated successfully. Apr 17 23:28:21.769321 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:28:21.770568 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:28:21.772327 systemd-logind[1453]: Removed session 8. 
Apr 17 23:28:21.981492 containerd[1473]: time="2026-04-17T23:28:21.980758309Z" level=info msg="StopPodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\"" Apr 17 23:28:22.055886 kubelet[2519]: I0417 23:28:22.055079 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-ptlpv" podStartSLOduration=25.79593949 podStartE2EDuration="35.055035458s" podCreationTimestamp="2026-04-17 23:27:47 +0000 UTC" firstStartedPulling="2026-04-17 23:28:10.207402945 +0000 UTC m=+44.363225426" lastFinishedPulling="2026-04-17 23:28:19.466498913 +0000 UTC m=+53.622321394" observedRunningTime="2026-04-17 23:28:20.370366596 +0000 UTC m=+54.526189077" watchObservedRunningTime="2026-04-17 23:28:22.055035458 +0000 UTC m=+56.210857979" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.053 [INFO][5017] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.054 [INFO][5017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" iface="eth0" netns="/var/run/netns/cni-e2c1b24f-61b1-4c3f-7305-3df37c05305a" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.054 [INFO][5017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" iface="eth0" netns="/var/run/netns/cni-e2c1b24f-61b1-4c3f-7305-3df37c05305a" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.058 [INFO][5017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" iface="eth0" netns="/var/run/netns/cni-e2c1b24f-61b1-4c3f-7305-3df37c05305a" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.058 [INFO][5017] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.058 [INFO][5017] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.079 [INFO][5024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.079 [INFO][5024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.079 [INFO][5024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.091 [WARNING][5024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.091 [INFO][5024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.093 [INFO][5024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:22.099565 containerd[1473]: 2026-04-17 23:28:22.097 [INFO][5017] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:22.100661 containerd[1473]: time="2026-04-17T23:28:22.100615443Z" level=info msg="TearDown network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" successfully" Apr 17 23:28:22.100661 containerd[1473]: time="2026-04-17T23:28:22.100649815Z" level=info msg="StopPodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" returns successfully" Apr 17 23:28:22.105909 systemd[1]: run-netns-cni\x2de2c1b24f\x2d61b1\x2d4c3f\x2d7305\x2d3df37c05305a.mount: Deactivated successfully. 
Apr 17 23:28:22.107373 containerd[1473]: time="2026-04-17T23:28:22.107215340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h44sh,Uid:35637d90-8ee7-47e3-a79b-be82b4dd0107,Namespace:calico-system,Attempt:1,}" Apr 17 23:28:22.268628 systemd-networkd[1366]: calia6fd467f9ea: Link UP Apr 17 23:28:22.273173 systemd-networkd[1366]: calia6fd467f9ea: Gained carrier Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.170 [INFO][5030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0 csi-node-driver- calico-system 35637d90-8ee7-47e3-a79b-be82b4dd0107 1083 0 2026-04-17 23:27:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-ddb46eeabf csi-node-driver-h44sh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia6fd467f9ea [] [] }} ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.170 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.197 [INFO][5042] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" 
HandleID="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.209 [INFO][5042] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" HandleID="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-ddb46eeabf", "pod":"csi-node-driver-h44sh", "timestamp":"2026-04-17 23:28:22.197987535 +0000 UTC"}, Hostname:"ci-4081-3-6-n-ddb46eeabf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000363080)} Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.209 [INFO][5042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.209 [INFO][5042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.209 [INFO][5042] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-ddb46eeabf' Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.212 [INFO][5042] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.224 [INFO][5042] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.229 [INFO][5042] ipam/ipam.go 526: Trying affinity for 192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.233 [INFO][5042] ipam/ipam.go 160: Attempting to load block cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.236 [INFO][5042] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.52.0/26 host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.236 [INFO][5042] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.52.0/26 handle="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.239 [INFO][5042] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543 Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.246 [INFO][5042] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.52.0/26 handle="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.257 [INFO][5042] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.52.8/26] block=192.168.52.0/26 handle="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.258 [INFO][5042] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.52.8/26] handle="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" host="ci-4081-3-6-n-ddb46eeabf" Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.258 [INFO][5042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:22.294127 containerd[1473]: 2026-04-17 23:28:22.258 [INFO][5042] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.52.8/26] IPv6=[] ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" HandleID="k8s-pod-network.2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.294690 containerd[1473]: 2026-04-17 23:28:22.261 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35637d90-8ee7-47e3-a79b-be82b4dd0107", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"", Pod:"csi-node-driver-h44sh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6fd467f9ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:22.294690 containerd[1473]: 2026-04-17 23:28:22.262 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.52.8/32] ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.294690 containerd[1473]: 2026-04-17 23:28:22.262 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6fd467f9ea ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.294690 containerd[1473]: 2026-04-17 23:28:22.263 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.294690 containerd[1473]: 2026-04-17 
23:28:22.264 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35637d90-8ee7-47e3-a79b-be82b4dd0107", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543", Pod:"csi-node-driver-h44sh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6fd467f9ea", MAC:"fe:06:e4:cb:00:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:22.294690 containerd[1473]: 2026-04-17 23:28:22.286 
[INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543" Namespace="calico-system" Pod="csi-node-driver-h44sh" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:22.328189 containerd[1473]: time="2026-04-17T23:28:22.326706818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:22.328189 containerd[1473]: time="2026-04-17T23:28:22.326771841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:22.328189 containerd[1473]: time="2026-04-17T23:28:22.326782765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:22.328703 containerd[1473]: time="2026-04-17T23:28:22.326890842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:22.347915 systemd[1]: Started cri-containerd-2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543.scope - libcontainer container 2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543. Apr 17 23:28:22.416966 containerd[1473]: time="2026-04-17T23:28:22.416865640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h44sh,Uid:35637d90-8ee7-47e3-a79b-be82b4dd0107,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543\"" Apr 17 23:28:23.347623 systemd-networkd[1366]: calia6fd467f9ea: Gained IPv6LL Apr 17 23:28:23.787628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582518663.mount: Deactivated successfully. 
Apr 17 23:28:23.814347 containerd[1473]: time="2026-04-17T23:28:23.812958660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:23.814347 containerd[1473]: time="2026-04-17T23:28:23.814304105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 17 23:28:23.814975 containerd[1473]: time="2026-04-17T23:28:23.814943596Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:23.818404 containerd[1473]: time="2026-04-17T23:28:23.818372850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.248472067s" Apr 17 23:28:23.818528 containerd[1473]: time="2026-04-17T23:28:23.818511536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 17 23:28:23.823627 containerd[1473]: time="2026-04-17T23:28:23.823591656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:28:23.830861 containerd[1473]: time="2026-04-17T23:28:23.830814885Z" level=info msg="CreateContainer within sandbox \"a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:28:23.839143 containerd[1473]: time="2026-04-17T23:28:23.839000913Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:23.847855 containerd[1473]: time="2026-04-17T23:28:23.847802504Z" level=info msg="CreateContainer within sandbox \"a442cde5a51266cbee632ac1eff31ff594c50947ad610734c5affb0637baa3e9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9992cd5a536277bbb374635e09bc52381a1da21251d4fb95b200ec4bb3853c5d\"" Apr 17 23:28:23.855147 containerd[1473]: time="2026-04-17T23:28:23.855086673Z" level=info msg="StartContainer for \"9992cd5a536277bbb374635e09bc52381a1da21251d4fb95b200ec4bb3853c5d\"" Apr 17 23:28:23.892405 systemd[1]: Started cri-containerd-9992cd5a536277bbb374635e09bc52381a1da21251d4fb95b200ec4bb3853c5d.scope - libcontainer container 9992cd5a536277bbb374635e09bc52381a1da21251d4fb95b200ec4bb3853c5d. Apr 17 23:28:23.937615 containerd[1473]: time="2026-04-17T23:28:23.937549826Z" level=info msg="StartContainer for \"9992cd5a536277bbb374635e09bc52381a1da21251d4fb95b200ec4bb3853c5d\" returns successfully" Apr 17 23:28:25.816410 containerd[1473]: time="2026-04-17T23:28:25.815229533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:25.816410 containerd[1473]: time="2026-04-17T23:28:25.816365472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 17 23:28:25.817203 containerd[1473]: time="2026-04-17T23:28:25.817141425Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:25.820822 containerd[1473]: time="2026-04-17T23:28:25.820763789Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:25.822139 containerd[1473]: time="2026-04-17T23:28:25.821992036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.998225002s" Apr 17 23:28:25.822139 containerd[1473]: time="2026-04-17T23:28:25.822027487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 17 23:28:25.828624 containerd[1473]: time="2026-04-17T23:28:25.828586169Z" level=info msg="CreateContainer within sandbox \"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:28:25.849582 containerd[1473]: time="2026-04-17T23:28:25.849406279Z" level=info msg="CreateContainer within sandbox \"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2dfede76ccaf117fb58c349f350c9bd4636ff20860fbcf026bc94ab99aab0190\"" Apr 17 23:28:25.851040 containerd[1473]: time="2026-04-17T23:28:25.850034427Z" level=info msg="StartContainer for \"2dfede76ccaf117fb58c349f350c9bd4636ff20860fbcf026bc94ab99aab0190\"" Apr 17 23:28:25.895438 systemd[1]: Started cri-containerd-2dfede76ccaf117fb58c349f350c9bd4636ff20860fbcf026bc94ab99aab0190.scope - libcontainer container 2dfede76ccaf117fb58c349f350c9bd4636ff20860fbcf026bc94ab99aab0190. 
Apr 17 23:28:25.933743 containerd[1473]: time="2026-04-17T23:28:25.933692941Z" level=info msg="StartContainer for \"2dfede76ccaf117fb58c349f350c9bd4636ff20860fbcf026bc94ab99aab0190\" returns successfully" Apr 17 23:28:25.938869 containerd[1473]: time="2026-04-17T23:28:25.938595888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:28:25.994033 containerd[1473]: time="2026-04-17T23:28:25.992695196Z" level=info msg="StopPodSandbox for \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\"" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.067 [WARNING][5214] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3c075d03-ef84-4fae-a46f-26a1d184ca06", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e", Pod:"coredns-7d764666f9-6z49m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad01d59865d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.067 [INFO][5214] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.067 [INFO][5214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" iface="eth0" netns="" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.067 [INFO][5214] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.067 [INFO][5214] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.106 [INFO][5221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.106 [INFO][5221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.106 [INFO][5221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.118 [WARNING][5221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.118 [INFO][5221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.120 [INFO][5221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.124273 containerd[1473]: 2026-04-17 23:28:26.122 [INFO][5214] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.124872 containerd[1473]: time="2026-04-17T23:28:26.124837475Z" level=info msg="TearDown network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\" successfully" Apr 17 23:28:26.124940 containerd[1473]: time="2026-04-17T23:28:26.124926020Z" level=info msg="StopPodSandbox for \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\" returns successfully" Apr 17 23:28:26.125814 containerd[1473]: time="2026-04-17T23:28:26.125783424Z" level=info msg="RemovePodSandbox for \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\"" Apr 17 23:28:26.129243 containerd[1473]: time="2026-04-17T23:28:26.129205640Z" level=info msg="Forcibly stopping sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\"" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.177 [WARNING][5235] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3c075d03-ef84-4fae-a46f-26a1d184ca06", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"204b2ebaf6ec05d67260cde709573c2c63b322dbbb28686c137f12406623656e", Pod:"coredns-7d764666f9-6z49m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad01d59865d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.177 [INFO][5235] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.177 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" iface="eth0" netns="" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.177 [INFO][5235] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.177 [INFO][5235] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.202 [INFO][5242] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.202 [INFO][5242] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.202 [INFO][5242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.215 [WARNING][5242] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.216 [INFO][5242] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" HandleID="k8s-pod-network.5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--6z49m-eth0" Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.218 [INFO][5242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.224430 containerd[1473]: 2026-04-17 23:28:26.222 [INFO][5235] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3" Apr 17 23:28:26.225198 containerd[1473]: time="2026-04-17T23:28:26.224788315Z" level=info msg="TearDown network for sandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\" successfully" Apr 17 23:28:26.231349 containerd[1473]: time="2026-04-17T23:28:26.231023212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:26.231349 containerd[1473]: time="2026-04-17T23:28:26.231179297Z" level=info msg="RemovePodSandbox \"5dc151d112d419d74c50c2ca09b9fb78a81dd90155dcb82dcde6a2f25cbf8ee3\" returns successfully" Apr 17 23:28:26.232132 containerd[1473]: time="2026-04-17T23:28:26.231983526Z" level=info msg="StopPodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\"" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.301 [WARNING][5256] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35637d90-8ee7-47e3-a79b-be82b4dd0107", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543", Pod:"csi-node-driver-h44sh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6fd467f9ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.301 [INFO][5256] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.301 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" iface="eth0" netns="" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.301 [INFO][5256] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.302 [INFO][5256] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.325 [INFO][5266] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.325 [INFO][5266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.325 [INFO][5266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.338 [WARNING][5266] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.338 [INFO][5266] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.341 [INFO][5266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.346535 containerd[1473]: 2026-04-17 23:28:26.344 [INFO][5256] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.347905 containerd[1473]: time="2026-04-17T23:28:26.346565735Z" level=info msg="TearDown network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" successfully" Apr 17 23:28:26.347905 containerd[1473]: time="2026-04-17T23:28:26.346589742Z" level=info msg="StopPodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" returns successfully" Apr 17 23:28:26.347905 containerd[1473]: time="2026-04-17T23:28:26.347294703Z" level=info msg="RemovePodSandbox for \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\"" Apr 17 23:28:26.347905 containerd[1473]: time="2026-04-17T23:28:26.347325072Z" level=info msg="Forcibly stopping sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\"" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.406 [WARNING][5280] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35637d90-8ee7-47e3-a79b-be82b4dd0107", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543", Pod:"csi-node-driver-h44sh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6fd467f9ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.407 [INFO][5280] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.407 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" iface="eth0" netns="" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.407 [INFO][5280] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.407 [INFO][5280] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.434 [INFO][5287] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.434 [INFO][5287] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.434 [INFO][5287] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.450 [WARNING][5287] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.450 [INFO][5287] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" HandleID="k8s-pod-network.f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-csi--node--driver--h44sh-eth0" Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.453 [INFO][5287] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.458944 containerd[1473]: 2026-04-17 23:28:26.456 [INFO][5280] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b" Apr 17 23:28:26.458944 containerd[1473]: time="2026-04-17T23:28:26.458891342Z" level=info msg="TearDown network for sandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" successfully" Apr 17 23:28:26.465395 containerd[1473]: time="2026-04-17T23:28:26.465334218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:26.465734 containerd[1473]: time="2026-04-17T23:28:26.465424484Z" level=info msg="RemovePodSandbox \"f6d17d0065386c2b22a806430bc53fc1fcf4eef78ee3207a42ffcba36612c15b\" returns successfully" Apr 17 23:28:26.466421 containerd[1473]: time="2026-04-17T23:28:26.466081191Z" level=info msg="StopPodSandbox for \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\"" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.506 [WARNING][5301] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"f88cc04e-7a14-485a-8daa-e700592ac36b", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2", Pod:"calico-apiserver-6f8ddd65ff-cgvqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f7605506b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.507 [INFO][5301] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.507 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" iface="eth0" netns="" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.507 [INFO][5301] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.507 [INFO][5301] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.529 [INFO][5308] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.529 [INFO][5308] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.529 [INFO][5308] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.541 [WARNING][5308] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.541 [INFO][5308] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.544 [INFO][5308] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.547725 containerd[1473]: 2026-04-17 23:28:26.546 [INFO][5301] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.548838 containerd[1473]: time="2026-04-17T23:28:26.548154657Z" level=info msg="TearDown network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\" successfully" Apr 17 23:28:26.548838 containerd[1473]: time="2026-04-17T23:28:26.548188147Z" level=info msg="StopPodSandbox for \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\" returns successfully" Apr 17 23:28:26.549272 containerd[1473]: time="2026-04-17T23:28:26.549229244Z" level=info msg="RemovePodSandbox for \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\"" Apr 17 23:28:26.549272 containerd[1473]: time="2026-04-17T23:28:26.549267214Z" level=info msg="Forcibly stopping sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\"" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.592 [WARNING][5323] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"f88cc04e-7a14-485a-8daa-e700592ac36b", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"ae56e5c6aae3fdf90bcc2b0dfdc625a446eda1b7941002ba2de21b956e6069c2", Pod:"calico-apiserver-6f8ddd65ff-cgvqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6f7605506b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.592 [INFO][5323] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.592 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" iface="eth0" netns="" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.592 [INFO][5323] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.592 [INFO][5323] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.614 [INFO][5331] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.614 [INFO][5331] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.614 [INFO][5331] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.624 [WARNING][5331] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.624 [INFO][5331] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" HandleID="k8s-pod-network.562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--cgvqb-eth0" Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.626 [INFO][5331] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.630993 containerd[1473]: 2026-04-17 23:28:26.628 [INFO][5323] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e" Apr 17 23:28:26.632052 containerd[1473]: time="2026-04-17T23:28:26.631046757Z" level=info msg="TearDown network for sandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\" successfully" Apr 17 23:28:26.636895 containerd[1473]: time="2026-04-17T23:28:26.636853452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:26.637000 containerd[1473]: time="2026-04-17T23:28:26.636952000Z" level=info msg="RemovePodSandbox \"562ed5e4480ad6d2b666f58983d296031cc2674f12d1635ed002f7317d30635e\" returns successfully" Apr 17 23:28:26.637730 containerd[1473]: time="2026-04-17T23:28:26.637706295Z" level=info msg="StopPodSandbox for \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\"" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.677 [WARNING][5345] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0", GenerateName:"calico-kube-controllers-7649d4fc56-", Namespace:"calico-system", SelfLink:"", UID:"6cae17b8-aaee-4419-851b-914e2d779768", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7649d4fc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e", Pod:"calico-kube-controllers-7649d4fc56-khl48", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaed04d94e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.678 [INFO][5345] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.678 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" iface="eth0" netns="" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.678 [INFO][5345] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.678 [INFO][5345] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.701 [INFO][5352] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.701 [INFO][5352] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.701 [INFO][5352] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.713 [WARNING][5352] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.713 [INFO][5352] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.715 [INFO][5352] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.721120 containerd[1473]: 2026-04-17 23:28:26.718 [INFO][5345] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.721120 containerd[1473]: time="2026-04-17T23:28:26.721042281Z" level=info msg="TearDown network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\" successfully" Apr 17 23:28:26.721120 containerd[1473]: time="2026-04-17T23:28:26.721083653Z" level=info msg="StopPodSandbox for \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\" returns successfully" Apr 17 23:28:26.724435 containerd[1473]: time="2026-04-17T23:28:26.723200376Z" level=info msg="RemovePodSandbox for \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\"" Apr 17 23:28:26.724435 containerd[1473]: time="2026-04-17T23:28:26.723234626Z" level=info msg="Forcibly stopping sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\"" Apr 17 23:28:26.797311 systemd[1]: Started sshd@8-142.132.185.111:22-50.85.169.122:43854.service - OpenSSH per-connection server daemon (50.85.169.122:43854). 
Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.771 [WARNING][5366] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0", GenerateName:"calico-kube-controllers-7649d4fc56-", Namespace:"calico-system", SelfLink:"", UID:"6cae17b8-aaee-4419-851b-914e2d779768", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7649d4fc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"720198fce8cbaf1e064c5375211f32051629b97569442625938f72b66fc3614e", Pod:"calico-kube-controllers-7649d4fc56-khl48", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.52.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califaed04d94e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.771 [INFO][5366] cni-plugin/k8s.go 652: Cleaning up 
netns ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.771 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" iface="eth0" netns="" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.771 [INFO][5366] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.771 [INFO][5366] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.795 [INFO][5373] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.795 [INFO][5373] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.795 [INFO][5373] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.809 [WARNING][5373] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.809 [INFO][5373] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" HandleID="k8s-pod-network.a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--kube--controllers--7649d4fc56--khl48-eth0" Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.812 [INFO][5373] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.817231 containerd[1473]: 2026-04-17 23:28:26.815 [INFO][5366] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf" Apr 17 23:28:26.817905 containerd[1473]: time="2026-04-17T23:28:26.817269701Z" level=info msg="TearDown network for sandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\" successfully" Apr 17 23:28:26.821019 containerd[1473]: time="2026-04-17T23:28:26.820984359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:26.821138 containerd[1473]: time="2026-04-17T23:28:26.821084228Z" level=info msg="RemovePodSandbox \"a80da4f5f5838b3251c76cf43b02443d7215dcd7c8004a7fb0532824c12044bf\" returns successfully" Apr 17 23:28:26.821934 containerd[1473]: time="2026-04-17T23:28:26.821609577Z" level=info msg="StopPodSandbox for \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\"" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.869 [WARNING][5390] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4bad2253-83ea-4317-8f1d-4aea885a0488", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c", Pod:"goldmane-9f7667bb8-ptlpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali44873c82834", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.869 [INFO][5390] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.869 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" iface="eth0" netns="" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.869 [INFO][5390] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.869 [INFO][5390] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.899 [INFO][5397] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.900 [INFO][5397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.900 [INFO][5397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.914 [WARNING][5397] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.914 [INFO][5397] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.916 [INFO][5397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:26.920875 containerd[1473]: 2026-04-17 23:28:26.918 [INFO][5390] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:26.922571 containerd[1473]: time="2026-04-17T23:28:26.921626197Z" level=info msg="TearDown network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\" successfully" Apr 17 23:28:26.922571 containerd[1473]: time="2026-04-17T23:28:26.921715622Z" level=info msg="StopPodSandbox for \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\" returns successfully" Apr 17 23:28:26.922898 containerd[1473]: time="2026-04-17T23:28:26.922821137Z" level=info msg="RemovePodSandbox for \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\"" Apr 17 23:28:26.922898 containerd[1473]: time="2026-04-17T23:28:26.922871791Z" level=info msg="Forcibly stopping sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\"" Apr 17 23:28:26.927924 sshd[5379]: Accepted publickey for core from 50.85.169.122 port 43854 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:28:26.930387 sshd[5379]: pam_unix(sshd:session): session 
opened for user core(uid=500) by core(uid=0) Apr 17 23:28:26.939498 systemd-logind[1453]: New session 9 of user core. Apr 17 23:28:26.945312 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:26.975 [WARNING][5411] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"4bad2253-83ea-4317-8f1d-4aea885a0488", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"90da9ac9c970d85c5dbdbc485d5a04ea25e2d5c537f4e00590f460c77d2b5e1c", Pod:"goldmane-9f7667bb8-ptlpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.52.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44873c82834", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:27.034049 
containerd[1473]: 2026-04-17 23:28:26.976 [INFO][5411] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:26.976 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" iface="eth0" netns="" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:26.976 [INFO][5411] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:26.976 [INFO][5411] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.007 [INFO][5419] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.007 [INFO][5419] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.007 [INFO][5419] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.018 [WARNING][5419] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.019 [INFO][5419] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" HandleID="k8s-pod-network.570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-goldmane--9f7667bb8--ptlpv-eth0" Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.023 [INFO][5419] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:27.034049 containerd[1473]: 2026-04-17 23:28:27.029 [INFO][5411] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160" Apr 17 23:28:27.034523 containerd[1473]: time="2026-04-17T23:28:27.034032184Z" level=info msg="TearDown network for sandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\" successfully" Apr 17 23:28:27.045609 containerd[1473]: time="2026-04-17T23:28:27.044496186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:27.045609 containerd[1473]: time="2026-04-17T23:28:27.044582049Z" level=info msg="RemovePodSandbox \"570f44800de14a898d8d9acb2ea35c3406f95dbab344aec6be9870d4e8321160\" returns successfully" Apr 17 23:28:27.045609 containerd[1473]: time="2026-04-17T23:28:27.045204778Z" level=info msg="StopPodSandbox for \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\"" Apr 17 23:28:27.151354 sshd[5379]: pam_unix(sshd:session): session closed for user core Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.095 [WARNING][5441] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"671ffde3-921c-4e59-9f78-76ef9c5efeb2", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3", Pod:"coredns-7d764666f9-7878s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali44528935d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.095 [INFO][5441] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.095 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" iface="eth0" netns="" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.095 [INFO][5441] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.095 [INFO][5441] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.129 [INFO][5449] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.130 [INFO][5449] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.130 [INFO][5449] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.145 [WARNING][5449] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.145 [INFO][5449] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.147 [INFO][5449] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:27.157254 containerd[1473]: 2026-04-17 23:28:27.150 [INFO][5441] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.157254 containerd[1473]: time="2026-04-17T23:28:27.155017718Z" level=info msg="TearDown network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\" successfully" Apr 17 23:28:27.157254 containerd[1473]: time="2026-04-17T23:28:27.155048967Z" level=info msg="StopPodSandbox for \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\" returns successfully" Apr 17 23:28:27.158945 containerd[1473]: time="2026-04-17T23:28:27.158358345Z" level=info msg="RemovePodSandbox for \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\"" Apr 17 23:28:27.158945 containerd[1473]: time="2026-04-17T23:28:27.158404518Z" level=info msg="Forcibly stopping sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\"" Apr 17 23:28:27.160197 systemd[1]: sshd@8-142.132.185.111:22-50.85.169.122:43854.service: Deactivated successfully. Apr 17 23:28:27.164658 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 17 23:28:27.167159 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:28:27.168733 systemd-logind[1453]: Removed session 9. Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.205 [WARNING][5466] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"671ffde3-921c-4e59-9f78-76ef9c5efeb2", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"f6c1815db2a699b79f9f46fdb58652613d6e453af79b153fb5e30a54006b4bd3", Pod:"coredns-7d764666f9-7878s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.52.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44528935d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.206 [INFO][5466] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.206 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" iface="eth0" netns="" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.206 [INFO][5466] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.206 [INFO][5466] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.229 [INFO][5473] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.229 [INFO][5473] ipam/ipam_plugin.go 
438: About to acquire host-wide IPAM lock. Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.229 [INFO][5473] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.242 [WARNING][5473] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.242 [INFO][5473] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" HandleID="k8s-pod-network.9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-coredns--7d764666f9--7878s-eth0" Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.244 [INFO][5473] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:27.249141 containerd[1473]: 2026-04-17 23:28:27.246 [INFO][5466] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb" Apr 17 23:28:27.249141 containerd[1473]: time="2026-04-17T23:28:27.248215786Z" level=info msg="TearDown network for sandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\" successfully" Apr 17 23:28:27.252617 containerd[1473]: time="2026-04-17T23:28:27.252578811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:27.252797 containerd[1473]: time="2026-04-17T23:28:27.252776665Z" level=info msg="RemovePodSandbox \"9f931d61123f885dc74bcbd62ea62a601c4081fe0f153071970243dbb6a76deb\" returns successfully" Apr 17 23:28:27.253412 containerd[1473]: time="2026-04-17T23:28:27.253385310Z" level=info msg="StopPodSandbox for \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\"" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.296 [WARNING][5488] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"ed934d57-c244-432a-8985-874dc75eb161", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129", Pod:"calico-apiserver-6f8ddd65ff-8w2zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a16fa50a0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.296 [INFO][5488] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.296 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" iface="eth0" netns="" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.296 [INFO][5488] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.296 [INFO][5488] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.320 [INFO][5495] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.320 [INFO][5495] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.320 [INFO][5495] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.331 [WARNING][5495] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.331 [INFO][5495] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.333 [INFO][5495] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:27.338547 containerd[1473]: 2026-04-17 23:28:27.335 [INFO][5488] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.338547 containerd[1473]: time="2026-04-17T23:28:27.338393114Z" level=info msg="TearDown network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\" successfully" Apr 17 23:28:27.345047 containerd[1473]: time="2026-04-17T23:28:27.338421401Z" level=info msg="StopPodSandbox for \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\" returns successfully" Apr 17 23:28:27.345533 containerd[1473]: time="2026-04-17T23:28:27.345491081Z" level=info msg="RemovePodSandbox for \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\"" Apr 17 23:28:27.345578 containerd[1473]: time="2026-04-17T23:28:27.345536694Z" level=info msg="Forcibly stopping sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\"" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.384 [WARNING][5509] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0", GenerateName:"calico-apiserver-6f8ddd65ff-", Namespace:"calico-system", SelfLink:"", UID:"ed934d57-c244-432a-8985-874dc75eb161", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 27, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8ddd65ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-ddb46eeabf", ContainerID:"f62444f6f9d184f3bd527d8ab4481308c9c1ad9d3cc025d435c6441953ed7129", Pod:"calico-apiserver-6f8ddd65ff-8w2zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.52.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a16fa50a0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.386 [INFO][5509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.386 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" iface="eth0" netns="" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.386 [INFO][5509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.386 [INFO][5509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.432 [INFO][5516] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.434 [INFO][5516] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.435 [INFO][5516] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.453 [WARNING][5516] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.453 [INFO][5516] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" HandleID="k8s-pod-network.254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-calico--apiserver--6f8ddd65ff--8w2zs-eth0" Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.455 [INFO][5516] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:27.460007 containerd[1473]: 2026-04-17 23:28:27.458 [INFO][5509] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8" Apr 17 23:28:27.460644 containerd[1473]: time="2026-04-17T23:28:27.460049990Z" level=info msg="TearDown network for sandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\" successfully" Apr 17 23:28:27.467115 containerd[1473]: time="2026-04-17T23:28:27.467038167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:28:27.467346 containerd[1473]: time="2026-04-17T23:28:27.467148997Z" level=info msg="RemovePodSandbox \"254426413a560f5a71140e3262c7160f33ffa239028c9237c89484162abbd3f8\" returns successfully" Apr 17 23:28:27.468392 containerd[1473]: time="2026-04-17T23:28:27.467996948Z" level=info msg="StopPodSandbox for \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\"" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.513 [WARNING][5536] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.513 [INFO][5536] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.513 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" iface="eth0" netns="" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.513 [INFO][5536] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.513 [INFO][5536] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.536 [INFO][5544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.536 [INFO][5544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.536 [INFO][5544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.549 [WARNING][5544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.549 [INFO][5544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0" Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.552 [INFO][5544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:28:27.556448 containerd[1473]: 2026-04-17 23:28:27.554 [INFO][5536] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Apr 17 23:28:27.557788 containerd[1473]: time="2026-04-17T23:28:27.556571560Z" level=info msg="TearDown network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\" successfully" Apr 17 23:28:27.557788 containerd[1473]: time="2026-04-17T23:28:27.557344930Z" level=info msg="StopPodSandbox for \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\" returns successfully" Apr 17 23:28:27.559514 containerd[1473]: time="2026-04-17T23:28:27.559243486Z" level=info msg="RemovePodSandbox for \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\"" Apr 17 23:28:27.559514 containerd[1473]: time="2026-04-17T23:28:27.559282256Z" level=info msg="Forcibly stopping sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\"" Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.605 [WARNING][5558] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" WorkloadEndpoint="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.606 [INFO][5558] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.606 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" iface="eth0" netns=""
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.606 [INFO][5558] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.606 [INFO][5558] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.630 [INFO][5565] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.630 [INFO][5565] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.631 [INFO][5565] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.643 [WARNING][5565] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.643 [INFO][5565] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" HandleID="k8s-pod-network.28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f" Workload="ci--4081--3--6--n--ddb46eeabf-k8s-whisker--585fd75c68--2fwmw-eth0"
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.646 [INFO][5565] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:28:27.652405 containerd[1473]: 2026-04-17 23:28:27.649 [INFO][5558] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f"
Apr 17 23:28:27.652405 containerd[1473]: time="2026-04-17T23:28:27.652308918Z" level=info msg="TearDown network for sandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\" successfully"
Apr 17 23:28:27.672006 containerd[1473]: time="2026-04-17T23:28:27.671707025Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:28:27.672006 containerd[1473]: time="2026-04-17T23:28:27.671829539Z" level=info msg="RemovePodSandbox \"28f36a00d809940c0793a9703510f9b002603829b4871e8e73d3c500f337f21f\" returns successfully"
Apr 17 23:28:27.807886 containerd[1473]: time="2026-04-17T23:28:27.807081706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:27.809022 containerd[1473]: time="2026-04-17T23:28:27.808976621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Apr 17 23:28:27.809890 containerd[1473]: time="2026-04-17T23:28:27.809861701Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:27.813959 containerd[1473]: time="2026-04-17T23:28:27.813922124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:27.814963 containerd[1473]: time="2026-04-17T23:28:27.814612911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.87596973s"
Apr 17 23:28:27.815206 containerd[1473]: time="2026-04-17T23:28:27.815181586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Apr 17 23:28:27.821782 containerd[1473]: time="2026-04-17T23:28:27.821537672Z" level=info msg="CreateContainer within sandbox \"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 17 23:28:27.850462 containerd[1473]: time="2026-04-17T23:28:27.850401550Z" level=info msg="CreateContainer within sandbox \"2e7bc0afbe5b0a9182dba1b6cab0a3bc4ee8e5b4ec056ef3d1ec1e86021dc543\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cce8672cd3dc6c1dabdf31349a68b7f9efeb310dd4b85ea47f3afdccd8a80849\""
Apr 17 23:28:27.851300 containerd[1473]: time="2026-04-17T23:28:27.851236457Z" level=info msg="StartContainer for \"cce8672cd3dc6c1dabdf31349a68b7f9efeb310dd4b85ea47f3afdccd8a80849\""
Apr 17 23:28:27.891288 systemd[1]: Started cri-containerd-cce8672cd3dc6c1dabdf31349a68b7f9efeb310dd4b85ea47f3afdccd8a80849.scope - libcontainer container cce8672cd3dc6c1dabdf31349a68b7f9efeb310dd4b85ea47f3afdccd8a80849.
Apr 17 23:28:27.922288 containerd[1473]: time="2026-04-17T23:28:27.921805980Z" level=info msg="StartContainer for \"cce8672cd3dc6c1dabdf31349a68b7f9efeb310dd4b85ea47f3afdccd8a80849\" returns successfully"
Apr 17 23:28:28.124307 kubelet[2519]: I0417 23:28:28.124048 2519 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 17 23:28:28.124307 kubelet[2519]: I0417 23:28:28.124112 2519 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 17 23:28:28.438047 kubelet[2519]: I0417 23:28:28.437915 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-86c9fc6889-rlp8h" podStartSLOduration=6.013650796 podStartE2EDuration="19.437896959s" podCreationTimestamp="2026-04-17 23:28:09 +0000 UTC" firstStartedPulling="2026-04-17 23:28:10.39646234 +0000 UTC m=+44.552284821" lastFinishedPulling="2026-04-17 23:28:23.820708423 +0000 UTC m=+57.976530984" observedRunningTime="2026-04-17 23:28:24.389523593 +0000 UTC m=+58.545346034" watchObservedRunningTime="2026-04-17 23:28:28.437896959 +0000 UTC m=+62.593719480"
Apr 17 23:28:28.439486 kubelet[2519]: I0417 23:28:28.439087 2519 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-h44sh" podStartSLOduration=35.041855242 podStartE2EDuration="40.438996163s" podCreationTimestamp="2026-04-17 23:27:48 +0000 UTC" firstStartedPulling="2026-04-17 23:28:22.419303448 +0000 UTC m=+56.575125929" lastFinishedPulling="2026-04-17 23:28:27.816444369 +0000 UTC m=+61.972266850" observedRunningTime="2026-04-17 23:28:28.438542246 +0000 UTC m=+62.594364767" watchObservedRunningTime="2026-04-17 23:28:28.438996163 +0000 UTC m=+62.594818684"
Apr 17 23:28:32.187585 systemd[1]: Started sshd@9-142.132.185.111:22-50.85.169.122:38600.service - OpenSSH per-connection server daemon (50.85.169.122:38600).
Apr 17 23:28:32.331224 sshd[5622]: Accepted publickey for core from 50.85.169.122 port 38600 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:32.334415 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:32.342384 systemd-logind[1453]: New session 10 of user core.
Apr 17 23:28:32.352209 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:28:32.542555 sshd[5622]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:32.550695 systemd[1]: sshd@9-142.132.185.111:22-50.85.169.122:38600.service: Deactivated successfully.
Apr 17 23:28:32.554489 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:28:32.555720 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:28:32.557823 systemd-logind[1453]: Removed session 10.
Apr 17 23:28:37.574622 systemd[1]: Started sshd@10-142.132.185.111:22-50.85.169.122:38612.service - OpenSSH per-connection server daemon (50.85.169.122:38612).
Apr 17 23:28:37.704705 sshd[5668]: Accepted publickey for core from 50.85.169.122 port 38612 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:37.706020 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:37.710212 systemd-logind[1453]: New session 11 of user core.
Apr 17 23:28:37.717384 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:28:37.898321 sshd[5668]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:37.903670 systemd[1]: sshd@10-142.132.185.111:22-50.85.169.122:38612.service: Deactivated successfully.
Apr 17 23:28:37.906939 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:28:37.908029 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:28:37.909099 systemd-logind[1453]: Removed session 11.
Apr 17 23:28:37.934838 systemd[1]: Started sshd@11-142.132.185.111:22-50.85.169.122:38616.service - OpenSSH per-connection server daemon (50.85.169.122:38616).
Apr 17 23:28:38.065771 sshd[5683]: Accepted publickey for core from 50.85.169.122 port 38616 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:38.068092 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:38.074618 systemd-logind[1453]: New session 12 of user core.
Apr 17 23:28:38.082410 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:28:38.307712 sshd[5683]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:38.314857 systemd[1]: sshd@11-142.132.185.111:22-50.85.169.122:38616.service: Deactivated successfully.
Apr 17 23:28:38.319150 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:28:38.321841 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:28:38.342858 systemd[1]: Started sshd@12-142.132.185.111:22-50.85.169.122:38624.service - OpenSSH per-connection server daemon (50.85.169.122:38624).
Apr 17 23:28:38.343801 systemd-logind[1453]: Removed session 12.
Apr 17 23:28:38.465278 sshd[5694]: Accepted publickey for core from 50.85.169.122 port 38624 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:38.467186 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:38.474869 systemd-logind[1453]: New session 13 of user core.
Apr 17 23:28:38.478413 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:28:38.664607 sshd[5694]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:38.670915 systemd[1]: sshd@12-142.132.185.111:22-50.85.169.122:38624.service: Deactivated successfully.
Apr 17 23:28:38.673005 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:28:38.673856 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:28:38.675139 systemd-logind[1453]: Removed session 13.
Apr 17 23:28:43.704478 systemd[1]: Started sshd@13-142.132.185.111:22-50.85.169.122:35790.service - OpenSSH per-connection server daemon (50.85.169.122:35790).
Apr 17 23:28:43.826882 sshd[5735]: Accepted publickey for core from 50.85.169.122 port 35790 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:43.829022 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:43.834289 systemd-logind[1453]: New session 14 of user core.
Apr 17 23:28:43.838249 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:28:44.019560 sshd[5735]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:44.024358 systemd[1]: sshd@13-142.132.185.111:22-50.85.169.122:35790.service: Deactivated successfully.
Apr 17 23:28:44.026460 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:28:44.027533 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:28:44.029593 systemd-logind[1453]: Removed session 14.
Apr 17 23:28:44.057410 systemd[1]: Started sshd@14-142.132.185.111:22-50.85.169.122:35794.service - OpenSSH per-connection server daemon (50.85.169.122:35794).
Apr 17 23:28:44.174725 sshd[5747]: Accepted publickey for core from 50.85.169.122 port 35794 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:44.176817 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:44.183046 systemd-logind[1453]: New session 15 of user core.
Apr 17 23:28:44.188558 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:28:44.534286 sshd[5747]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:44.539682 systemd[1]: sshd@14-142.132.185.111:22-50.85.169.122:35794.service: Deactivated successfully.
Apr 17 23:28:44.541821 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:28:44.542988 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:28:44.544157 systemd-logind[1453]: Removed session 15.
Apr 17 23:28:44.568779 systemd[1]: Started sshd@15-142.132.185.111:22-50.85.169.122:35802.service - OpenSSH per-connection server daemon (50.85.169.122:35802).
Apr 17 23:28:44.698720 sshd[5758]: Accepted publickey for core from 50.85.169.122 port 35802 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:44.701178 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:44.706256 systemd-logind[1453]: New session 16 of user core.
Apr 17 23:28:44.713293 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:28:45.555477 sshd[5758]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:45.563940 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:28:45.564209 systemd[1]: sshd@15-142.132.185.111:22-50.85.169.122:35802.service: Deactivated successfully.
Apr 17 23:28:45.565903 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:28:45.579162 systemd-logind[1453]: Removed session 16.
Apr 17 23:28:45.586516 systemd[1]: Started sshd@16-142.132.185.111:22-50.85.169.122:35810.service - OpenSSH per-connection server daemon (50.85.169.122:35810).
Apr 17 23:28:45.726005 sshd[5778]: Accepted publickey for core from 50.85.169.122 port 35810 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:45.727589 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:45.734014 systemd-logind[1453]: New session 17 of user core.
Apr 17 23:28:45.740319 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:28:46.056676 sshd[5778]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:46.062360 systemd[1]: sshd@16-142.132.185.111:22-50.85.169.122:35810.service: Deactivated successfully.
Apr 17 23:28:46.067029 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:28:46.070336 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:28:46.089340 systemd[1]: Started sshd@17-142.132.185.111:22-50.85.169.122:35812.service - OpenSSH per-connection server daemon (50.85.169.122:35812).
Apr 17 23:28:46.090193 systemd-logind[1453]: Removed session 17.
Apr 17 23:28:46.219220 sshd[5792]: Accepted publickey for core from 50.85.169.122 port 35812 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:46.221427 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:46.225862 systemd-logind[1453]: New session 18 of user core.
Apr 17 23:28:46.232606 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:28:46.406457 sshd[5792]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:46.411855 systemd[1]: sshd@17-142.132.185.111:22-50.85.169.122:35812.service: Deactivated successfully.
Apr 17 23:28:46.415340 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:28:46.416239 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:28:46.417461 systemd-logind[1453]: Removed session 18.
Apr 17 23:28:51.373967 systemd[1]: run-containerd-runc-k8s.io-f077f6534b369e32b99881481a132bffac0d0a2652732f80f881139244179762-runc.gOiz2s.mount: Deactivated successfully.
Apr 17 23:28:51.444290 systemd[1]: Started sshd@18-142.132.185.111:22-50.85.169.122:55390.service - OpenSSH per-connection server daemon (50.85.169.122:55390).
Apr 17 23:28:51.585100 sshd[5881]: Accepted publickey for core from 50.85.169.122 port 55390 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:51.587616 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:51.593591 systemd-logind[1453]: New session 19 of user core.
Apr 17 23:28:51.599405 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:28:51.776709 sshd[5881]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:51.782004 systemd[1]: sshd@18-142.132.185.111:22-50.85.169.122:55390.service: Deactivated successfully.
Apr 17 23:28:51.785281 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:28:51.787121 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:28:51.788124 systemd-logind[1453]: Removed session 19.
Apr 17 23:28:56.813468 systemd[1]: Started sshd@19-142.132.185.111:22-50.85.169.122:55396.service - OpenSSH per-connection server daemon (50.85.169.122:55396).
Apr 17 23:28:56.934499 sshd[5896]: Accepted publickey for core from 50.85.169.122 port 55396 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:56.937026 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:56.944848 systemd-logind[1453]: New session 20 of user core.
Apr 17 23:28:56.949342 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:28:57.133272 sshd[5896]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:57.139186 systemd[1]: sshd@19-142.132.185.111:22-50.85.169.122:55396.service: Deactivated successfully.
Apr 17 23:28:57.142911 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:28:57.143998 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:28:57.146161 systemd-logind[1453]: Removed session 20.
Apr 17 23:29:12.310201 kubelet[2519]: E0417 23:29:12.310106 2519 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:47118->10.0.0.2:2379: read: connection timed out"
Apr 17 23:29:13.213110 systemd[1]: cri-containerd-d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00.scope: Deactivated successfully.
Apr 17 23:29:13.213904 systemd[1]: cri-containerd-d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00.scope: Consumed 2.567s CPU time, 17.4M memory peak, 0B memory swap peak.
Apr 17 23:29:13.249747 containerd[1473]: time="2026-04-17T23:29:13.249689214Z" level=info msg="shim disconnected" id=d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00 namespace=k8s.io
Apr 17 23:29:13.249747 containerd[1473]: time="2026-04-17T23:29:13.249741138Z" level=warning msg="cleaning up after shim disconnected" id=d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00 namespace=k8s.io
Apr 17 23:29:13.249747 containerd[1473]: time="2026-04-17T23:29:13.249749139Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:29:13.251884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00-rootfs.mount: Deactivated successfully.
Apr 17 23:29:13.571546 kubelet[2519]: I0417 23:29:13.571295 2519 scope.go:122] "RemoveContainer" containerID="d23ddc4bf10cf4a293ce229701b4118ea0ee4661b87d35073866a0d3775f3c00"
Apr 17 23:29:13.575775 containerd[1473]: time="2026-04-17T23:29:13.575716909Z" level=info msg="CreateContainer within sandbox \"0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:29:13.597855 containerd[1473]: time="2026-04-17T23:29:13.597804656Z" level=info msg="CreateContainer within sandbox \"0b98d6fc53dd1f118c6bf10d7a6686072ddaf3c110d93ff58867b05ec35b0f10\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d05ff1008fdbee21a1282e9908bb2ab726951a39ca3b1444d8acfc5dbde132fe\""
Apr 17 23:29:13.599007 containerd[1473]: time="2026-04-17T23:29:13.598976871Z" level=info msg="StartContainer for \"d05ff1008fdbee21a1282e9908bb2ab726951a39ca3b1444d8acfc5dbde132fe\""
Apr 17 23:29:13.599323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838452466.mount: Deactivated successfully.
Apr 17 23:29:13.634395 systemd[1]: Started cri-containerd-d05ff1008fdbee21a1282e9908bb2ab726951a39ca3b1444d8acfc5dbde132fe.scope - libcontainer container d05ff1008fdbee21a1282e9908bb2ab726951a39ca3b1444d8acfc5dbde132fe.
Apr 17 23:29:13.671212 containerd[1473]: time="2026-04-17T23:29:13.671163590Z" level=info msg="StartContainer for \"d05ff1008fdbee21a1282e9908bb2ab726951a39ca3b1444d8acfc5dbde132fe\" returns successfully"
Apr 17 23:29:13.703116 systemd[1]: cri-containerd-a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e.scope: Deactivated successfully.
Apr 17 23:29:13.705168 systemd[1]: cri-containerd-a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e.scope: Consumed 8.852s CPU time.
Apr 17 23:29:13.730971 containerd[1473]: time="2026-04-17T23:29:13.730904503Z" level=info msg="shim disconnected" id=a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e namespace=k8s.io
Apr 17 23:29:13.730971 containerd[1473]: time="2026-04-17T23:29:13.730966748Z" level=warning msg="cleaning up after shim disconnected" id=a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e namespace=k8s.io
Apr 17 23:29:13.731159 containerd[1473]: time="2026-04-17T23:29:13.730979469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:29:14.252321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e-rootfs.mount: Deactivated successfully.
Apr 17 23:29:14.578864 kubelet[2519]: I0417 23:29:14.578582 2519 scope.go:122] "RemoveContainer" containerID="a74e0bbb3c445dc53d6e9c31a2e2e2e7dd8c27db04dfe7ab490f3159bed6b47e"
Apr 17 23:29:14.580648 containerd[1473]: time="2026-04-17T23:29:14.580578735Z" level=info msg="CreateContainer within sandbox \"8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 17 23:29:14.597406 containerd[1473]: time="2026-04-17T23:29:14.597315138Z" level=info msg="CreateContainer within sandbox \"8606d61190e9cd81a6385a36bc44d49658228f0ede04325e1a5f91515b58deec\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0b379f98fb0dba4650c79784d459a956b24797c468e329d003875e8a4445cdb5\""
Apr 17 23:29:14.598136 containerd[1473]: time="2026-04-17T23:29:14.598019502Z" level=info msg="StartContainer for \"0b379f98fb0dba4650c79784d459a956b24797c468e329d003875e8a4445cdb5\""
Apr 17 23:29:14.630260 systemd[1]: Started cri-containerd-0b379f98fb0dba4650c79784d459a956b24797c468e329d003875e8a4445cdb5.scope - libcontainer container 0b379f98fb0dba4650c79784d459a956b24797c468e329d003875e8a4445cdb5.
Apr 17 23:29:14.657992 containerd[1473]: time="2026-04-17T23:29:14.657937238Z" level=info msg="StartContainer for \"0b379f98fb0dba4650c79784d459a956b24797c468e329d003875e8a4445cdb5\" returns successfully"
Apr 17 23:29:16.859731 systemd[1]: cri-containerd-e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d.scope: Deactivated successfully.
Apr 17 23:29:16.862112 systemd[1]: cri-containerd-e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d.scope: Consumed 1.468s CPU time, 15.0M memory peak, 0B memory swap peak.
Apr 17 23:29:16.885690 containerd[1473]: time="2026-04-17T23:29:16.885399055Z" level=info msg="shim disconnected" id=e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d namespace=k8s.io
Apr 17 23:29:16.885690 containerd[1473]: time="2026-04-17T23:29:16.885465609Z" level=warning msg="cleaning up after shim disconnected" id=e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d namespace=k8s.io
Apr 17 23:29:16.885690 containerd[1473]: time="2026-04-17T23:29:16.885476168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:29:16.887972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6b587828ee88fdce64b5e59ebcc3f65ee880f5b5a725d931710ba2eb16d7a1d-rootfs.mount: Deactivated successfully.