Apr 21 09:57:31.901480 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 21 09:57:31.901525 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 21 08:40:46 -00 2026
Apr 21 09:57:31.901540 kernel: KASLR enabled
Apr 21 09:57:31.901548 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 09:57:31.901555 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 21 09:57:31.901562 kernel: random: crng init done
Apr 21 09:57:31.901571 kernel: ACPI: Early table checksum verification disabled
Apr 21 09:57:31.901578 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 21 09:57:31.901586 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 21 09:57:31.901596 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901604 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901611 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901618 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901626 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901636 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901645 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901653 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901661 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 09:57:31.901670 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 09:57:31.901677 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 21 09:57:31.901685 kernel: NUMA: Failed to initialise from firmware
Apr 21 09:57:31.901693 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 21 09:57:31.901701 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 21 09:57:31.901709 kernel: Zone ranges:
Apr 21 09:57:31.901716 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 21 09:57:31.901726 kernel: DMA32 empty
Apr 21 09:57:31.901734 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 21 09:57:31.901742 kernel: Movable zone start for each node
Apr 21 09:57:31.901750 kernel: Early memory node ranges
Apr 21 09:57:31.901758 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 21 09:57:31.901766 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 21 09:57:31.901773 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 21 09:57:31.901781 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 21 09:57:31.901789 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 21 09:57:31.901797 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 21 09:57:31.901805 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 21 09:57:31.901813 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 21 09:57:31.901823 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 09:57:31.901831 kernel: psci: probing for conduit method from ACPI.
Apr 21 09:57:31.901839 kernel: psci: PSCIv1.1 detected in firmware.
Apr 21 09:57:31.901851 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 21 09:57:31.901860 kernel: psci: Trusted OS migration not required
Apr 21 09:57:31.901868 kernel: psci: SMC Calling Convention v1.1
Apr 21 09:57:31.901878 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 21 09:57:31.901887 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 21 09:57:31.901895 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 21 09:57:31.901904 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 21 09:57:31.901912 kernel: Detected PIPT I-cache on CPU0
Apr 21 09:57:31.901921 kernel: CPU features: detected: GIC system register CPU interface
Apr 21 09:57:31.901929 kernel: CPU features: detected: Hardware dirty bit management
Apr 21 09:57:31.901938 kernel: CPU features: detected: Spectre-v4
Apr 21 09:57:31.901946 kernel: CPU features: detected: Spectre-BHB
Apr 21 09:57:31.901954 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 21 09:57:31.901964 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 21 09:57:31.901973 kernel: CPU features: detected: ARM erratum 1418040
Apr 21 09:57:31.901981 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 21 09:57:31.901990 kernel: alternatives: applying boot alternatives
Apr 21 09:57:31.901999 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407
Apr 21 09:57:31.902008 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 09:57:31.902017 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 09:57:31.902025 kernel: Fallback order for Node 0: 0
Apr 21 09:57:31.902033 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 21 09:57:31.902042 kernel: Policy zone: Normal
Apr 21 09:57:31.902050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 09:57:31.902060 kernel: software IO TLB: area num 2.
Apr 21 09:57:31.902069 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 21 09:57:31.902078 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 21 09:57:31.902086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 09:57:31.902094 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 09:57:31.902103 kernel: rcu: RCU event tracing is enabled.
Apr 21 09:57:31.902112 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 09:57:31.902121 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 09:57:31.902129 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 09:57:31.902138 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 09:57:31.902146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 09:57:31.902154 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 21 09:57:31.902165 kernel: GICv3: 256 SPIs implemented
Apr 21 09:57:31.902173 kernel: GICv3: 0 Extended SPIs implemented
Apr 21 09:57:31.902181 kernel: Root IRQ handler: gic_handle_irq
Apr 21 09:57:31.902204 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 21 09:57:31.902214 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 21 09:57:31.902221 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 21 09:57:31.902228 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 21 09:57:31.902236 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 21 09:57:31.902243 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 21 09:57:31.902250 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 21 09:57:31.902256 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 09:57:31.902266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 09:57:31.902273 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 21 09:57:31.902280 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 21 09:57:31.902287 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 21 09:57:31.902294 kernel: Console: colour dummy device 80x25
Apr 21 09:57:31.902301 kernel: ACPI: Core revision 20230628
Apr 21 09:57:31.902309 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 21 09:57:31.902316 kernel: pid_max: default: 32768 minimum: 301
Apr 21 09:57:31.902323 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 09:57:31.902330 kernel: landlock: Up and running.
Apr 21 09:57:31.902338 kernel: SELinux: Initializing.
Apr 21 09:57:31.902346 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 09:57:31.902353 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 09:57:31.902360 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 09:57:31.904466 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 09:57:31.904485 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 09:57:31.904493 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 09:57:31.904502 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 21 09:57:31.904509 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 21 09:57:31.904523 kernel: Remapping and enabling EFI services.
Apr 21 09:57:31.904531 kernel: smp: Bringing up secondary CPUs ...
Apr 21 09:57:31.904538 kernel: Detected PIPT I-cache on CPU1
Apr 21 09:57:31.904545 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 21 09:57:31.904553 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 21 09:57:31.904560 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 21 09:57:31.904567 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 21 09:57:31.904575 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 09:57:31.904582 kernel: SMP: Total of 2 processors activated.
Apr 21 09:57:31.904589 kernel: CPU features: detected: 32-bit EL0 Support
Apr 21 09:57:31.904598 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 21 09:57:31.904606 kernel: CPU features: detected: Common not Private translations
Apr 21 09:57:31.904619 kernel: CPU features: detected: CRC32 instructions
Apr 21 09:57:31.904629 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 21 09:57:31.904636 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 21 09:57:31.904644 kernel: CPU features: detected: LSE atomic instructions
Apr 21 09:57:31.904651 kernel: CPU features: detected: Privileged Access Never
Apr 21 09:57:31.904659 kernel: CPU features: detected: RAS Extension Support
Apr 21 09:57:31.904669 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 21 09:57:31.904677 kernel: CPU: All CPU(s) started at EL1
Apr 21 09:57:31.904684 kernel: alternatives: applying system-wide alternatives
Apr 21 09:57:31.904692 kernel: devtmpfs: initialized
Apr 21 09:57:31.904700 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 09:57:31.904707 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 09:57:31.904715 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 09:57:31.904722 kernel: SMBIOS 3.0.0 present.
Apr 21 09:57:31.904732 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 21 09:57:31.904740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 09:57:31.904747 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 21 09:57:31.904755 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 21 09:57:31.904763 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 21 09:57:31.904770 kernel: audit: initializing netlink subsys (disabled)
Apr 21 09:57:31.904778 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Apr 21 09:57:31.904786 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 09:57:31.904793 kernel: cpuidle: using governor menu
Apr 21 09:57:31.904803 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 21 09:57:31.904810 kernel: ASID allocator initialised with 32768 entries
Apr 21 09:57:31.904818 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 09:57:31.904826 kernel: Serial: AMBA PL011 UART driver
Apr 21 09:57:31.904834 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 21 09:57:31.904841 kernel: Modules: 0 pages in range for non-PLT usage
Apr 21 09:57:31.904849 kernel: Modules: 509008 pages in range for PLT usage
Apr 21 09:57:31.904856 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 09:57:31.904864 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 09:57:31.904873 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 21 09:57:31.904881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 21 09:57:31.904888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 09:57:31.904896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 09:57:31.904903 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 21 09:57:31.904910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 21 09:57:31.904918 kernel: ACPI: Added _OSI(Module Device)
Apr 21 09:57:31.904928 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 09:57:31.904936 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 09:57:31.904945 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 09:57:31.904953 kernel: ACPI: Interpreter enabled
Apr 21 09:57:31.904961 kernel: ACPI: Using GIC for interrupt routing
Apr 21 09:57:31.904968 kernel: ACPI: MCFG table detected, 1 entries
Apr 21 09:57:31.904976 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 21 09:57:31.904983 kernel: printk: console [ttyAMA0] enabled
Apr 21 09:57:31.904991 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 09:57:31.905151 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 09:57:31.905282 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 21 09:57:31.905358 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 21 09:57:31.905472 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 21 09:57:31.905550 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 21 09:57:31.905561 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 21 09:57:31.905569 kernel: PCI host bridge to bus 0000:00
Apr 21 09:57:31.905656 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 21 09:57:31.905725 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 21 09:57:31.905786 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 21 09:57:31.905846 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 09:57:31.905930 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 21 09:57:31.906009 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 21 09:57:31.906080 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 21 09:57:31.906152 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 21 09:57:31.906261 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.906334 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 21 09:57:31.906413 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.906869 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 21 09:57:31.906953 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907023 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 21 09:57:31.907105 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907177 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 21 09:57:31.907280 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907352 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 21 09:57:31.907459 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907537 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 21 09:57:31.907619 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907689 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 21 09:57:31.907764 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907833 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 21 09:57:31.907910 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 09:57:31.907979 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 21 09:57:31.908063 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 21 09:57:31.908135 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 21 09:57:31.908277 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 09:57:31.908369 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 21 09:57:31.908504 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 21 09:57:31.908580 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 09:57:31.908657 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 09:57:31.908733 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 21 09:57:31.908809 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 09:57:31.910551 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 21 09:57:31.910640 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 21 09:57:31.910725 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 09:57:31.910798 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 21 09:57:31.910883 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 09:57:31.910956 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 21 09:57:31.911027 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 21 09:57:31.911104 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 09:57:31.911175 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 21 09:57:31.911269 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 21 09:57:31.911356 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 09:57:31.911452 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 21 09:57:31.911528 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 21 09:57:31.911599 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 09:57:31.911672 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 21 09:57:31.911762 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 21 09:57:31.912576 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 21 09:57:31.912667 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 21 09:57:31.912737 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 21 09:57:31.912807 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 21 09:57:31.912881 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 21 09:57:31.912951 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 21 09:57:31.913020 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 21 09:57:31.913092 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 21 09:57:31.913163 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 21 09:57:31.913285 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 21 09:57:31.913369 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 21 09:57:31.914477 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 21 09:57:31.914577 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 21 09:57:31.914651 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 21 09:57:31.914720 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 21 09:57:31.914787 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 21 09:57:31.914866 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 09:57:31.914935 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 21 09:57:31.915002 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 21 09:57:31.915073 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 09:57:31.915142 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 21 09:57:31.915257 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 21 09:57:31.915338 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 09:57:31.915414 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 21 09:57:31.915608 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 21 09:57:31.915678 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 21 09:57:31.915744 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:31.915811 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 21 09:57:31.915878 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:31.915946 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 21 09:57:31.916019 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:31.916089 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 21 09:57:31.916156 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:31.916244 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 21 09:57:31.916317 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:31.916387 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:31.917308 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:31.917405 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:31.917554 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:31.917629 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:31.917699 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:31.917766 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 21 09:57:31.917833 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:31.917907 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 21 09:57:31.917980 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 21 09:57:31.918048 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 21 09:57:31.918114 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 09:57:31.918181 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 21 09:57:31.918266 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 09:57:31.918336 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 21 09:57:31.918405 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 09:57:31.918490 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 21 09:57:31.918564 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 21 09:57:31.918632 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 21 09:57:31.918698 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 21 09:57:31.918767 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 21 09:57:31.918834 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 21 09:57:31.918938 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 21 09:57:31.919008 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 21 09:57:31.919076 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 21 09:57:31.919148 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 21 09:57:31.919235 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 21 09:57:31.919311 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 21 09:57:31.919386 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 21 09:57:31.919492 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 21 09:57:31.919569 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 21 09:57:31.919641 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 21 09:57:31.919709 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 09:57:31.919784 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 21 09:57:31.919852 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 21 09:57:31.919920 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:31.919995 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 21 09:57:31.920067 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 09:57:31.920137 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 21 09:57:31.920251 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 21 09:57:31.920329 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:31.920406 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 21 09:57:31.920568 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 21 09:57:31.920642 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 09:57:31.920734 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 21 09:57:31.920811 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 21 09:57:31.920882 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:31.920958 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 21 09:57:31.921027 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 09:57:31.921094 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 21 09:57:31.921161 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 21 09:57:31.921245 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:31.921325 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 21 09:57:31.921399 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 21 09:57:31.921567 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 09:57:31.921641 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 21 09:57:31.921706 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 21 09:57:31.921771 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:31.921846 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 21 09:57:31.921916 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 21 09:57:31.921983 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 09:57:31.922055 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 21 09:57:31.922145 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:31.922231 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:31.922311 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 21 09:57:31.922380 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 21 09:57:31.922519 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 21 09:57:31.922591 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 09:57:31.922658 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 21 09:57:31.922729 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:31.922795 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:31.922863 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 09:57:31.922930 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 21 09:57:31.923000 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:31.923067 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:31.923138 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 09:57:31.923247 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 21 09:57:31.923505 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 21 09:57:31.923588 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:31.923658 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 21 09:57:31.923720 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 21 09:57:31.923780 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 21 09:57:31.923858 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 21 09:57:31.923920 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 21 09:57:31.923988 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 21 09:57:31.924056 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 21 09:57:31.924118 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 21 09:57:31.924178 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 21 09:57:31.924273 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 21 09:57:31.924339 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 21 09:57:31.924406 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 21 09:57:31.924517 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 21 09:57:31.924587 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 21 09:57:31.924669 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 21 09:57:31.924741 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 21 09:57:31.924803 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 21 09:57:31.924865 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 21 09:57:31.924938 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 21 09:57:31.925011 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 21 09:57:31.925078 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 21 09:57:31.925152 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 21 09:57:31.925264 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 21 09:57:31.925337 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 21 09:57:31.925409 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 21 09:57:31.925557 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 21 09:57:31.925623 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 21 09:57:31.925691 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 21 09:57:31.925753 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 21 09:57:31.925820 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 21 09:57:31.925830 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 21 09:57:31.925839 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 21 09:57:31.925847 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 21 09:57:31.925855 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 21 09:57:31.925863 kernel: iommu: Default domain type: Translated
Apr 21 09:57:31.925871 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 21 09:57:31.925879 kernel: efivars: Registered efivars operations
Apr 21 09:57:31.925889 kernel: vgaarb: loaded
Apr 21 09:57:31.925897 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 21 09:57:31.925905 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 09:57:31.925913 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 09:57:31.925921 kernel: pnp: PnP ACPI init
Apr 21 09:57:31.926002 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 21 09:57:31.926014 kernel: pnp: PnP ACPI: found 1 devices
Apr 21 09:57:31.926022 kernel: NET: Registered PF_INET protocol family
Apr 21 09:57:31.926030 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 09:57:31.926040 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 09:57:31.926048 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 09:57:31.926057 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 09:57:31.926065
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 21 09:57:31.926073 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 21 09:57:31.926081 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 09:57:31.926089 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 09:57:31.926097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 09:57:31.926175 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 21 09:57:31.926202 kernel: PCI: CLS 0 bytes, default 64 Apr 21 09:57:31.926211 kernel: kvm [1]: HYP mode not available Apr 21 09:57:31.926219 kernel: Initialise system trusted keyrings Apr 21 09:57:31.926227 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 09:57:31.926235 kernel: Key type asymmetric registered Apr 21 09:57:31.926242 kernel: Asymmetric key parser 'x509' registered Apr 21 09:57:31.926250 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 21 09:57:31.926258 kernel: io scheduler mq-deadline registered Apr 21 09:57:31.926266 kernel: io scheduler kyber registered Apr 21 09:57:31.926276 kernel: io scheduler bfq registered Apr 21 09:57:31.926285 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 21 09:57:31.926367 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 21 09:57:31.926454 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 21 09:57:31.926542 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.926616 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 21 09:57:31.926690 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 21 09:57:31.926756 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.926825 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 21 09:57:31.926894 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 21 09:57:31.926962 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.927030 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 21 09:57:31.927101 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 21 09:57:31.927169 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.927283 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 21 09:57:31.927357 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 21 09:57:31.927466 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.927547 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 21 09:57:31.927622 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 21 09:57:31.927689 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.927758 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 21 09:57:31.927827 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 21 09:57:31.927893 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.927962 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 21 09:57:31.928034 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 21 09:57:31.928100 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.928111 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 21 09:57:31.928177 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 21 09:57:31.928268 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 21 09:57:31.928338 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 21 09:57:31.928349 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 21 09:57:31.928360 kernel: ACPI: button: Power Button [PWRB] Apr 21 09:57:31.928369 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 21 09:57:31.928492 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 21 09:57:31.928576 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 21 09:57:31.928587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 09:57:31.928596 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 21 09:57:31.928665 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 21 09:57:31.928676 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 21 09:57:31.928684 kernel: thunder_xcv, ver 1.0 Apr 21 09:57:31.928696 kernel: thunder_bgx, ver 1.0 Apr 21 09:57:31.928704 kernel: nicpf, ver 1.0 Apr 21 09:57:31.928711 kernel: nicvf, ver 1.0 Apr 21 09:57:31.928793 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 21 09:57:31.928858 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-21T09:57:31 UTC (1776765451) Apr 21 09:57:31.928868 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 09:57:31.928876 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 21 09:57:31.928884 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 21 09:57:31.928895 kernel: watchdog: Hard watchdog permanently disabled Apr 21 09:57:31.928903 kernel: NET: Registered PF_INET6 protocol family Apr 21 09:57:31.928911 kernel: Segment Routing with IPv6 Apr 21 09:57:31.928919 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 21 09:57:31.928926 kernel: NET: Registered PF_PACKET protocol family Apr 21 09:57:31.928934 kernel: Key type dns_resolver registered Apr 21 09:57:31.928942 kernel: registered taskstats version 1 Apr 21 09:57:31.928950 kernel: Loading compiled-in X.509 certificates Apr 21 09:57:31.928958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 3383becb6d31527ac15d01269e47e8fdf1030cd4' Apr 21 09:57:31.928968 kernel: Key type .fscrypt registered Apr 21 09:57:31.928975 kernel: Key type fscrypt-provisioning registered Apr 21 09:57:31.928983 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 09:57:31.928991 kernel: ima: Allocated hash algorithm: sha1 Apr 21 09:57:31.928999 kernel: ima: No architecture policies found Apr 21 09:57:31.929007 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 21 09:57:31.929015 kernel: clk: Disabling unused clocks Apr 21 09:57:31.929023 kernel: Freeing unused kernel memory: 39424K Apr 21 09:57:31.929031 kernel: Run /init as init process Apr 21 09:57:31.929040 kernel: with arguments: Apr 21 09:57:31.929049 kernel: /init Apr 21 09:57:31.929056 kernel: with environment: Apr 21 09:57:31.929474 kernel: HOME=/ Apr 21 09:57:31.929489 kernel: TERM=linux Apr 21 09:57:31.929499 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 09:57:31.929510 systemd[1]: Detected virtualization kvm. Apr 21 09:57:31.929518 systemd[1]: Detected architecture arm64. Apr 21 09:57:31.929532 systemd[1]: Running in initrd. Apr 21 09:57:31.929540 systemd[1]: No hostname configured, using default hostname. Apr 21 09:57:31.929548 systemd[1]: Hostname set to . 
Apr 21 09:57:31.929557 systemd[1]: Initializing machine ID from VM UUID. Apr 21 09:57:31.929565 systemd[1]: Queued start job for default target initrd.target. Apr 21 09:57:31.929574 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 09:57:31.929582 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 09:57:31.929592 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 09:57:31.929602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 09:57:31.929612 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 09:57:31.929621 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 09:57:31.929631 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 09:57:31.929640 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 09:57:31.929648 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 09:57:31.929657 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 09:57:31.929666 systemd[1]: Reached target paths.target - Path Units. Apr 21 09:57:31.929674 systemd[1]: Reached target slices.target - Slice Units. Apr 21 09:57:31.929683 systemd[1]: Reached target swap.target - Swaps. Apr 21 09:57:31.929691 systemd[1]: Reached target timers.target - Timer Units. Apr 21 09:57:31.929699 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 09:57:31.929707 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 09:57:31.929715 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 21 09:57:31.929723 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 09:57:31.929733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 09:57:31.929742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 09:57:31.929750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 09:57:31.929758 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 09:57:31.929766 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 09:57:31.929775 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 09:57:31.929783 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 09:57:31.929791 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 09:57:31.929799 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 09:57:31.929810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 09:57:31.929818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 09:57:31.929827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 09:57:31.929835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 09:57:31.929844 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 09:57:31.929879 systemd-journald[236]: Collecting audit messages is disabled. Apr 21 09:57:31.929903 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 09:57:31.929912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 09:57:31.929922 kernel: Bridge firewalling registered Apr 21 09:57:31.929930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 21 09:57:31.929939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 09:57:31.929947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 09:57:31.929957 systemd-journald[236]: Journal started Apr 21 09:57:31.929976 systemd-journald[236]: Runtime Journal (/run/log/journal/e354d97108324738b09c8ebbb2b80e72) is 8.0M, max 76.6M, 68.6M free. Apr 21 09:57:31.893555 systemd-modules-load[237]: Inserted module 'overlay' Apr 21 09:57:31.931556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 09:57:31.915459 systemd-modules-load[237]: Inserted module 'br_netfilter' Apr 21 09:57:31.934462 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 09:57:31.936453 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 09:57:31.944629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 09:57:31.948742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 09:57:31.956400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 09:57:31.965481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 09:57:31.966465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 09:57:31.975731 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 09:57:31.976518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 09:57:31.982636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 09:57:31.991408 dracut-cmdline[272]: dracut-dracut-053 Apr 21 09:57:31.996345 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=406dfa58472aa4d4545d9757071aae8c3923de73d7e3cb8f6327066fa2449407 Apr 21 09:57:32.020887 systemd-resolved[274]: Positive Trust Anchors: Apr 21 09:57:32.020901 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 09:57:32.020932 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 09:57:32.026804 systemd-resolved[274]: Defaulting to hostname 'linux'. Apr 21 09:57:32.027970 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 09:57:32.030386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 09:57:32.085478 kernel: SCSI subsystem initialized Apr 21 09:57:32.090480 kernel: Loading iSCSI transport class v2.0-870. Apr 21 09:57:32.098768 kernel: iscsi: registered transport (tcp) Apr 21 09:57:32.112468 kernel: iscsi: registered transport (qla4xxx) Apr 21 09:57:32.112561 kernel: QLogic iSCSI HBA Driver Apr 21 09:57:32.161904 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 21 09:57:32.167618 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 09:57:32.193598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 09:57:32.193675 kernel: device-mapper: uevent: version 1.0.3 Apr 21 09:57:32.193687 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 09:57:32.247491 kernel: raid6: neonx8 gen() 15571 MB/s Apr 21 09:57:32.264504 kernel: raid6: neonx4 gen() 15508 MB/s Apr 21 09:57:32.281508 kernel: raid6: neonx2 gen() 13062 MB/s Apr 21 09:57:32.298495 kernel: raid6: neonx1 gen() 10394 MB/s Apr 21 09:57:32.315503 kernel: raid6: int64x8 gen() 6880 MB/s Apr 21 09:57:32.332478 kernel: raid6: int64x4 gen() 7258 MB/s Apr 21 09:57:32.349476 kernel: raid6: int64x2 gen() 6051 MB/s Apr 21 09:57:32.366506 kernel: raid6: int64x1 gen() 5008 MB/s Apr 21 09:57:32.366596 kernel: raid6: using algorithm neonx8 gen() 15571 MB/s Apr 21 09:57:32.383504 kernel: raid6: .... xor() 11816 MB/s, rmw enabled Apr 21 09:57:32.383582 kernel: raid6: using neon recovery algorithm Apr 21 09:57:32.388564 kernel: xor: measuring software checksum speed Apr 21 09:57:32.388632 kernel: 8regs : 19812 MB/sec Apr 21 09:57:32.389861 kernel: 32regs : 19660 MB/sec Apr 21 09:57:32.389894 kernel: arm64_neon : 26963 MB/sec Apr 21 09:57:32.389914 kernel: xor: using function: arm64_neon (26963 MB/sec) Apr 21 09:57:32.442481 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 09:57:32.458465 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 09:57:32.464657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 09:57:32.488244 systemd-udevd[456]: Using default interface naming scheme 'v255'. Apr 21 09:57:32.494111 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 21 09:57:32.503718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 09:57:32.523414 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation Apr 21 09:57:32.562011 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 09:57:32.566680 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 09:57:32.636036 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 09:57:32.645012 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 09:57:32.666487 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 09:57:32.668999 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 09:57:32.669832 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 09:57:32.672847 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 09:57:32.682728 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 09:57:32.700416 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 09:57:32.731527 kernel: scsi host0: Virtio SCSI HBA Apr 21 09:57:32.737020 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 09:57:32.737122 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 09:57:32.774546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 09:57:32.776608 kernel: ACPI: bus type USB registered Apr 21 09:57:32.776635 kernel: usbcore: registered new interface driver usbfs Apr 21 09:57:32.775318 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 09:57:32.778611 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 09:57:32.780508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 21 09:57:32.786472 kernel: usbcore: registered new interface driver hub Apr 21 09:57:32.786504 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 21 09:57:32.786692 kernel: usbcore: registered new device driver usb Apr 21 09:57:32.780685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 09:57:32.784112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 09:57:32.793458 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 21 09:57:32.793688 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 09:57:32.793972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 09:57:32.796440 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 21 09:57:32.817085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 09:57:32.823753 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 09:57:32.825091 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 21 09:57:32.825310 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 21 09:57:32.828454 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 21 09:57:32.828692 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 21 09:57:32.828782 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 09:57:32.833555 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 09:57:32.833621 kernel: GPT:17805311 != 80003071 Apr 21 09:57:32.833632 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 09:57:32.835290 kernel: GPT:17805311 != 80003071 Apr 21 09:57:32.835401 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 21 09:57:32.835457 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:32.838462 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 21 09:57:32.847442 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 09:57:32.847673 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 21 09:57:32.847763 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 21 09:57:32.850561 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 09:57:32.850781 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 21 09:57:32.851613 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 21 09:57:32.852647 kernel: hub 1-0:1.0: USB hub found Apr 21 09:57:32.853498 kernel: hub 1-0:1.0: 4 ports detected Apr 21 09:57:32.858433 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 21 09:57:32.858625 kernel: hub 2-0:1.0: USB hub found Apr 21 09:57:32.858735 kernel: hub 2-0:1.0: 4 ports detected Apr 21 09:57:32.864788 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 09:57:32.886689 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (521) Apr 21 09:57:32.899470 kernel: BTRFS: device fsid be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (519) Apr 21 09:57:32.905038 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 09:57:32.915547 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 09:57:32.922131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 09:57:32.927593 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Apr 21 09:57:32.929588 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 09:57:32.936649 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 09:57:32.944639 disk-uuid[577]: Primary Header is updated. Apr 21 09:57:32.944639 disk-uuid[577]: Secondary Entries is updated. Apr 21 09:57:32.944639 disk-uuid[577]: Secondary Header is updated. Apr 21 09:57:32.953084 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:32.956448 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:32.961531 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:33.095588 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 21 09:57:33.233475 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 21 09:57:33.233576 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 21 09:57:33.233914 kernel: usbcore: registered new interface driver usbhid Apr 21 09:57:33.233939 kernel: usbhid: USB HID core driver Apr 21 09:57:33.339627 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 21 09:57:33.470482 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 21 09:57:33.525648 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 21 09:57:33.965289 disk-uuid[578]: The operation has completed successfully. Apr 21 09:57:33.966489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 09:57:34.023582 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 09:57:34.023693 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Apr 21 09:57:34.039708 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 09:57:34.058763 sh[595]: Success Apr 21 09:57:34.071443 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 21 09:57:34.119929 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 09:57:34.129849 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 09:57:34.131939 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 09:57:34.159681 kernel: BTRFS info (device dm-0): first mount of filesystem be2a029c-0ccf-4981-91f9-c6e4b4ef2fb8 Apr 21 09:57:34.159744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 21 09:57:34.159755 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 09:57:34.159766 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 09:57:34.160434 kernel: BTRFS info (device dm-0): using free space tree Apr 21 09:57:34.166457 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 09:57:34.168465 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 09:57:34.170767 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 09:57:34.186841 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 09:57:34.191585 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 21 09:57:34.204367 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 09:57:34.204435 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 21 09:57:34.204453 kernel: BTRFS info (device sda6): using free space tree Apr 21 09:57:34.209794 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 09:57:34.209856 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 09:57:34.221447 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde Apr 21 09:57:34.221786 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 09:57:34.229244 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 09:57:34.236766 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 09:57:34.323222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 09:57:34.335747 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 09:57:34.348201 ignition[690]: Ignition 2.19.0 Apr 21 09:57:34.348214 ignition[690]: Stage: fetch-offline Apr 21 09:57:34.348268 ignition[690]: no configs at "/usr/lib/ignition/base.d" Apr 21 09:57:34.348277 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 09:57:34.348467 ignition[690]: parsed url from cmdline: "" Apr 21 09:57:34.352121 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 21 09:57:34.348470 ignition[690]: no config URL provided Apr 21 09:57:34.348474 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 09:57:34.348482 ignition[690]: no config at "/usr/lib/ignition/user.ign" Apr 21 09:57:34.348493 ignition[690]: failed to fetch config: resource requires networking Apr 21 09:57:34.348705 ignition[690]: Ignition finished successfully Apr 21 09:57:34.359774 systemd-networkd[783]: lo: Link UP Apr 21 09:57:34.359784 systemd-networkd[783]: lo: Gained carrier Apr 21 09:57:34.361602 systemd-networkd[783]: Enumeration completed Apr 21 09:57:34.361732 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 09:57:34.362354 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 09:57:34.362358 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 09:57:34.363730 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 09:57:34.363733 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 09:57:34.364319 systemd-networkd[783]: eth0: Link UP Apr 21 09:57:34.364322 systemd-networkd[783]: eth0: Gained carrier Apr 21 09:57:34.364329 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 09:57:34.365457 systemd[1]: Reached target network.target - Network. Apr 21 09:57:34.370835 systemd-networkd[783]: eth1: Link UP Apr 21 09:57:34.370838 systemd-networkd[783]: eth1: Gained carrier Apr 21 09:57:34.370849 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 09:57:34.372677 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 09:57:34.388342 ignition[788]: Ignition 2.19.0
Apr 21 09:57:34.388354 ignition[788]: Stage: fetch
Apr 21 09:57:34.388551 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:34.388561 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:34.388650 ignition[788]: parsed url from cmdline: ""
Apr 21 09:57:34.388654 ignition[788]: no config URL provided
Apr 21 09:57:34.388659 ignition[788]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 09:57:34.388666 ignition[788]: no config at "/usr/lib/ignition/user.ign"
Apr 21 09:57:34.388686 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 21 09:57:34.389488 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 09:57:34.412582 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 09:57:34.436533 systemd-networkd[783]: eth0: DHCPv4 address 178.104.214.66/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 09:57:34.590604 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 21 09:57:34.597532 ignition[788]: GET result: OK
Apr 21 09:57:34.597690 ignition[788]: parsing config with SHA512: 3178829b952eec902734ab456b45738ecee91194f4d2d3f9bd9b7e170516bd84fa1340dbf85eb8645d2514e1993227bf877ad60f4505a26a9f3f7b1681b806cf
Apr 21 09:57:34.603353 unknown[788]: fetched base config from "system"
Apr 21 09:57:34.603363 unknown[788]: fetched base config from "system"
Apr 21 09:57:34.603749 ignition[788]: fetch: fetch complete
Apr 21 09:57:34.603368 unknown[788]: fetched user config from "hetzner"
Apr 21 09:57:34.603754 ignition[788]: fetch: fetch passed
Apr 21 09:57:34.607584 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 09:57:34.603804 ignition[788]: Ignition finished successfully
Apr 21 09:57:34.614691 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 09:57:34.631749 ignition[797]: Ignition 2.19.0
Apr 21 09:57:34.631761 ignition[797]: Stage: kargs
Apr 21 09:57:34.631947 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:34.631957 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:34.632921 ignition[797]: kargs: kargs passed
Apr 21 09:57:34.632970 ignition[797]: Ignition finished successfully
Apr 21 09:57:34.635959 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 09:57:34.645809 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 09:57:34.660023 ignition[803]: Ignition 2.19.0
Apr 21 09:57:34.660037 ignition[803]: Stage: disks
Apr 21 09:57:34.660246 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:34.660257 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:34.661196 ignition[803]: disks: disks passed
Apr 21 09:57:34.663458 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 09:57:34.661248 ignition[803]: Ignition finished successfully
Apr 21 09:57:34.665995 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 09:57:34.667153 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 09:57:34.668876 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 09:57:34.669993 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 09:57:34.671262 systemd[1]: Reached target basic.target - Basic System.
Apr 21 09:57:34.679670 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 09:57:34.699069 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 21 09:57:34.703979 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 09:57:34.711622 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 09:57:34.765470 kernel: EXT4-fs (sda9): mounted filesystem 97544627-6598-4a50-85bf-78c13463f4bd r/w with ordered data mode. Quota mode: none.
Apr 21 09:57:34.767007 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 09:57:34.768315 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 09:57:34.781676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 09:57:34.787252 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 09:57:34.794443 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (819)
Apr 21 09:57:34.794683 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 09:57:34.795320 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 09:57:34.802317 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:34.802343 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:34.802355 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:34.795353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 09:57:34.803664 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 09:57:34.813692 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 09:57:34.819672 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:34.819754 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:34.827374 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 09:57:34.862127 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 09:57:34.866671 coreos-metadata[821]: Apr 21 09:57:34.866 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 21 09:57:34.869097 coreos-metadata[821]: Apr 21 09:57:34.867 INFO Fetch successful
Apr 21 09:57:34.869097 coreos-metadata[821]: Apr 21 09:57:34.867 INFO wrote hostname ci-4081-3-7-a-ee081c135b to /sysroot/etc/hostname
Apr 21 09:57:34.871513 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 21 09:57:34.872898 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 09:57:34.878833 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 09:57:34.883508 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 09:57:34.986457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 09:57:34.993684 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 09:57:35.007479 kernel: BTRFS info (device sda6): last unmount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:35.008790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 09:57:35.037552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 09:57:35.041470 ignition[936]: INFO : Ignition 2.19.0
Apr 21 09:57:35.041470 ignition[936]: INFO : Stage: mount
Apr 21 09:57:35.041470 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:35.041470 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:35.044625 ignition[936]: INFO : mount: mount passed
Apr 21 09:57:35.046075 ignition[936]: INFO : Ignition finished successfully
Apr 21 09:57:35.047202 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 09:57:35.052585 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 09:57:35.159795 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 09:57:35.169769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 09:57:35.177469 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Apr 21 09:57:35.179738 kernel: BTRFS info (device sda6): first mount of filesystem 271cc9ce-9bef-4147-844b-0996375babde
Apr 21 09:57:35.179801 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 21 09:57:35.179829 kernel: BTRFS info (device sda6): using free space tree
Apr 21 09:57:35.182467 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 09:57:35.182525 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 09:57:35.185986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 09:57:35.206450 ignition[966]: INFO : Ignition 2.19.0
Apr 21 09:57:35.206450 ignition[966]: INFO : Stage: files
Apr 21 09:57:35.208485 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:35.208485 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:35.211213 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 09:57:35.211213 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 09:57:35.211213 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 09:57:35.215935 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 09:57:35.215935 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 09:57:35.215935 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 09:57:35.215786 unknown[966]: wrote ssh authorized keys file for user: core
Apr 21 09:57:35.220958 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 09:57:35.222255 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 21 09:57:35.745697 systemd-networkd[783]: eth0: Gained IPv6LL
Apr 21 09:57:35.930510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 09:57:36.321617 systemd-networkd[783]: eth1: Gained IPv6LL
Apr 21 09:57:40.950505 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 21 09:57:40.955082 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 09:57:40.955082 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 09:57:40.955082 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 09:57:40.955082 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 09:57:40.955082 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 21 09:57:40.960535 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 21 09:57:41.421456 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 09:57:42.711880 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 21 09:57:42.711880 ignition[966]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 09:57:42.716189 ignition[966]: INFO : files: files passed
Apr 21 09:57:42.716189 ignition[966]: INFO : Ignition finished successfully
Apr 21 09:57:42.717676 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 09:57:42.728474 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 09:57:42.731817 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 09:57:42.735718 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 09:57:42.737536 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 09:57:42.755324 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:42.755324 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:42.759319 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 09:57:42.761714 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 09:57:42.764129 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 09:57:42.769733 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 09:57:42.809182 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 09:57:42.809351 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 09:57:42.811613 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 09:57:42.812647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 09:57:42.813675 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 09:57:42.816699 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 09:57:42.835499 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 09:57:42.841611 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 09:57:42.852561 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:42.853975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:42.854759 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 09:57:42.856976 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 09:57:42.857096 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 09:57:42.859234 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 09:57:42.860564 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 09:57:42.861571 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 09:57:42.862714 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 09:57:42.864054 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 09:57:42.865399 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 09:57:42.866547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 09:57:42.867903 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 09:57:42.869086 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 09:57:42.870186 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 09:57:42.871135 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 09:57:42.871277 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 09:57:42.872759 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:42.874117 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:42.875283 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 09:57:42.879503 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:42.880252 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 09:57:42.880370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 09:57:42.882489 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 09:57:42.882602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 09:57:42.885695 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 09:57:42.885794 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 09:57:42.887820 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 09:57:42.887917 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 09:57:42.901929 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 09:57:42.906802 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 09:57:42.910673 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 09:57:42.910974 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:42.919677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 09:57:42.919917 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 09:57:42.935716 ignition[1018]: INFO : Ignition 2.19.0
Apr 21 09:57:42.935716 ignition[1018]: INFO : Stage: umount
Apr 21 09:57:42.937533 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 09:57:42.937533 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 09:57:42.937968 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 09:57:42.940259 ignition[1018]: INFO : umount: umount passed
Apr 21 09:57:42.940259 ignition[1018]: INFO : Ignition finished successfully
Apr 21 09:57:42.941011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 09:57:42.944941 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 09:57:42.945543 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 09:57:42.945670 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 09:57:42.948196 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 09:57:42.948289 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 09:57:42.949095 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 09:57:42.949149 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 09:57:42.950103 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 09:57:42.950155 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 09:57:42.951077 systemd[1]: Stopped target network.target - Network.
Apr 21 09:57:42.951990 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 09:57:42.952045 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 09:57:42.953182 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 09:57:42.954112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 09:57:42.957565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:42.958346 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 09:57:42.959767 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 09:57:42.960827 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 09:57:42.960876 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 09:57:42.961924 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 09:57:42.961961 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 09:57:42.963071 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 09:57:42.963126 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 09:57:42.964191 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 09:57:42.964239 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 09:57:42.965534 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 09:57:42.966540 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:42.968330 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 09:57:42.968441 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 09:57:42.970195 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 09:57:42.970264 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 09:57:42.970280 systemd-networkd[783]: eth1: DHCPv6 lease lost
Apr 21 09:57:42.973755 systemd-networkd[783]: eth0: DHCPv6 lease lost
Apr 21 09:57:42.976620 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 09:57:42.976830 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 09:57:42.978713 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 09:57:42.978837 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:42.983324 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 09:57:42.983386 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:42.988622 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 09:57:42.989450 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 09:57:42.989599 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 09:57:42.992069 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 09:57:42.992121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:42.993836 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 09:57:42.993895 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:42.995566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 09:57:42.995626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:43.000061 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:43.014718 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 09:57:43.016479 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:43.019382 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 09:57:43.019547 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:43.021597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 09:57:43.021629 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:43.022708 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 09:57:43.022755 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 09:57:43.024492 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 09:57:43.024540 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 09:57:43.026016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 09:57:43.026059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 09:57:43.039267 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 09:57:43.041288 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 09:57:43.041417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:43.043157 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 09:57:43.043242 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 09:57:43.047546 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 09:57:43.047593 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:43.048711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 09:57:43.048749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:43.052747 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 09:57:43.052864 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 09:57:43.053729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 09:57:43.054511 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 09:57:43.055969 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 09:57:43.064605 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 09:57:43.072780 systemd[1]: Switching root.
Apr 21 09:57:43.110518 systemd-journald[236]: Journal stopped
Apr 21 09:57:44.001497 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Apr 21 09:57:44.001576 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 09:57:44.001593 kernel: SELinux: policy capability open_perms=1
Apr 21 09:57:44.001602 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 09:57:44.001612 kernel: SELinux: policy capability always_check_network=0
Apr 21 09:57:44.001626 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 09:57:44.001640 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 09:57:44.001649 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 09:57:44.001668 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 09:57:44.001678 systemd[1]: Successfully loaded SELinux policy in 34.459ms.
Apr 21 09:57:44.001699 kernel: audit: type=1403 audit(1776765463.273:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 09:57:44.001712 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.357ms.
Apr 21 09:57:44.001726 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 09:57:44.001737 systemd[1]: Detected virtualization kvm.
Apr 21 09:57:44.001748 systemd[1]: Detected architecture arm64.
Apr 21 09:57:44.001758 systemd[1]: Detected first boot.
Apr 21 09:57:44.001768 systemd[1]: Hostname set to .
Apr 21 09:57:44.001778 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 09:57:44.001789 zram_generator::config[1060]: No configuration found.
Apr 21 09:57:44.001802 systemd[1]: Populated /etc with preset unit settings.
Apr 21 09:57:44.001812 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 09:57:44.001822 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 09:57:44.001832 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 09:57:44.001843 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 09:57:44.001871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 09:57:44.001887 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 09:57:44.001899 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 09:57:44.001912 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 09:57:44.001923 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 09:57:44.001934 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 09:57:44.001944 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 09:57:44.001954 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 09:57:44.001964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 09:57:44.001976 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 09:57:44.001986 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 09:57:44.001996 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 09:57:44.002008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 09:57:44.002022 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 21 09:57:44.002034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 09:57:44.002055 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 09:57:44.002076 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 09:57:44.002093 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 09:57:44.002108 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 09:57:44.002119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 09:57:44.002129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 09:57:44.002152 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 09:57:44.002167 systemd[1]: Reached target swap.target - Swaps.
Apr 21 09:57:44.002181 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 09:57:44.002192 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 09:57:44.002202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 09:57:44.002213 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 09:57:44.002226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 09:57:44.002237 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 09:57:44.002247 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 09:57:44.002259 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 09:57:44.002270 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 09:57:44.002280 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 09:57:44.002290 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 09:57:44.002300 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 09:57:44.002311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 09:57:44.002323 systemd[1]: Reached target machines.target - Containers.
Apr 21 09:57:44.002339 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 09:57:44.002353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:44.002367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 09:57:44.002380 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 09:57:44.002392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:44.002402 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 09:57:44.002413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:44.002438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 09:57:44.002450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:44.002461 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 09:57:44.002472 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 09:57:44.002483 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 09:57:44.002493 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 09:57:44.002505 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 09:57:44.002517 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 09:57:44.002528 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 09:57:44.002543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 09:57:44.002554 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 09:57:44.002565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 09:57:44.002605 systemd-journald[1130]: Collecting audit messages is disabled.
Apr 21 09:57:44.002636 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 09:57:44.002648 systemd[1]: Stopped verity-setup.service.
Apr 21 09:57:44.002660 systemd-journald[1130]: Journal started
Apr 21 09:57:44.002682 systemd-journald[1130]: Runtime Journal (/run/log/journal/e354d97108324738b09c8ebbb2b80e72) is 8.0M, max 76.6M, 68.6M free.
Apr 21 09:57:43.755448 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 09:57:43.780480 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 09:57:43.781224 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 09:57:44.005449 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 09:57:44.005514 kernel: loop: module loaded
Apr 21 09:57:44.007981 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 09:57:44.008788 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 09:57:44.010044 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 09:57:44.011730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 09:57:44.013614 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 09:57:44.014769 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 09:57:44.017828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 09:57:44.019034 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 09:57:44.019198 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 09:57:44.020256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:44.020396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:44.023873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:44.024101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:44.025297 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:44.025500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:44.027634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 09:57:44.029798 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 09:57:44.039211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 09:57:44.051195 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 09:57:44.053437 kernel: ACPI: bus type drm_connector registered
Apr 21 09:57:44.059586 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 09:57:44.060258 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 09:57:44.060304 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 09:57:44.066953 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 09:57:44.068621 kernel: fuse: init (API version 7.39)
Apr 21 09:57:44.074875 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 09:57:44.082664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 09:57:44.083542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:44.088636 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 09:57:44.093277 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 09:57:44.096580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:44.100814 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 09:57:44.101586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:44.110035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 09:57:44.116476 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 09:57:44.118620 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 09:57:44.123668 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 09:57:44.124731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 09:57:44.124876 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 09:57:44.125816 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 09:57:44.125950 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 09:57:44.127381 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 09:57:44.129932 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 09:57:44.140784 systemd-journald[1130]: Time spent on flushing to /var/log/journal/e354d97108324738b09c8ebbb2b80e72 is 68.824ms for 1124 entries.
Apr 21 09:57:44.140784 systemd-journald[1130]: System Journal (/var/log/journal/e354d97108324738b09c8ebbb2b80e72) is 8.0M, max 584.8M, 576.8M free.
Apr 21 09:57:44.231827 systemd-journald[1130]: Received client request to flush runtime journal.
Apr 21 09:57:44.231884 kernel: loop0: detected capacity change from 0 to 114432
Apr 21 09:57:44.141927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 09:57:44.147837 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 09:57:44.158602 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 09:57:44.175707 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 09:57:44.185664 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 09:57:44.201092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 09:57:44.217125 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 09:57:44.230098 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 09:57:44.245475 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 09:57:44.234383 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 09:57:44.240442 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 09:57:44.244355 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Apr 21 09:57:44.244368 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Apr 21 09:57:44.249932 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 09:57:44.256828 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 09:57:44.267694 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 09:57:44.274685 kernel: loop1: detected capacity change from 0 to 114328
Apr 21 09:57:44.277035 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 09:57:44.307099 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 09:57:44.315834 kernel: loop2: detected capacity change from 0 to 197488
Apr 21 09:57:44.316556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 09:57:44.337007 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 21 09:57:44.337125 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Apr 21 09:57:44.343644 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 09:57:44.357226 kernel: loop3: detected capacity change from 0 to 8
Apr 21 09:57:44.386465 kernel: loop4: detected capacity change from 0 to 114432
Apr 21 09:57:44.404458 kernel: loop5: detected capacity change from 0 to 114328
Apr 21 09:57:44.423459 kernel: loop6: detected capacity change from 0 to 197488
Apr 21 09:57:44.445450 kernel: loop7: detected capacity change from 0 to 8
Apr 21 09:57:44.445347 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 21 09:57:44.446255 (sd-merge)[1205]: Merged extensions into '/usr'.
Apr 21 09:57:44.453541 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 09:57:44.453558 systemd[1]: Reloading...
Apr 21 09:57:44.550581 zram_generator::config[1230]: No configuration found.
Apr 21 09:57:44.706625 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 09:57:44.708010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:57:44.755561 systemd[1]: Reloading finished in 301 ms.
Apr 21 09:57:44.799292 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 09:57:44.804315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 09:57:44.813599 systemd[1]: Starting ensure-sysext.service...
Apr 21 09:57:44.815369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 09:57:44.826500 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Apr 21 09:57:44.826639 systemd[1]: Reloading...
Apr 21 09:57:44.862020 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 09:57:44.862307 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 09:57:44.862969 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 09:57:44.863226 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Apr 21 09:57:44.863272 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Apr 21 09:57:44.869299 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 09:57:44.869486 systemd-tmpfiles[1269]: Skipping /boot
Apr 21 09:57:44.881196 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 09:57:44.881358 systemd-tmpfiles[1269]: Skipping /boot
Apr 21 09:57:44.935441 zram_generator::config[1293]: No configuration found.
Apr 21 09:57:45.032501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 09:57:45.079658 systemd[1]: Reloading finished in 252 ms.
Apr 21 09:57:45.100582 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 09:57:45.107106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 09:57:45.125978 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 09:57:45.131904 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 09:57:45.137767 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 09:57:45.147735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 09:57:45.151705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 09:57:45.161734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 09:57:45.166200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.173876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:45.178778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:45.183123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:45.184970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.189722 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 09:57:45.199315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:45.201529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:45.203215 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 09:57:45.207637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.213783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:45.214703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.224725 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 09:57:45.228345 systemd-udevd[1345]: Using default interface naming scheme 'v255'.
Apr 21 09:57:45.233224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.238836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 09:57:45.239588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.245500 systemd[1]: Finished ensure-sysext.service.
Apr 21 09:57:45.247843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:45.248013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:45.250930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:45.264626 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 09:57:45.266928 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 09:57:45.269769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:45.269959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:45.273019 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 09:57:45.278876 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 09:57:45.280848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 09:57:45.283354 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 09:57:45.285169 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 09:57:45.307658 augenrules[1386]: No rules
Apr 21 09:57:45.311608 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 09:57:45.312660 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 09:57:45.314493 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 09:57:45.315651 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:45.316063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:45.331043 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:45.349121 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 09:57:45.417539 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 21 09:57:45.491977 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 09:57:45.492903 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 09:57:45.501639 systemd-resolved[1344]: Positive Trust Anchors:
Apr 21 09:57:45.501671 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 09:57:45.501705 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 09:57:45.511606 systemd-networkd[1388]: lo: Link UP
Apr 21 09:57:45.511618 systemd-networkd[1388]: lo: Gained carrier
Apr 21 09:57:45.514612 systemd-networkd[1388]: Enumeration completed
Apr 21 09:57:45.514728 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 09:57:45.517868 systemd-resolved[1344]: Using system hostname 'ci-4081-3-7-a-ee081c135b'.
Apr 21 09:57:45.524602 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 09:57:45.527595 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 09:57:45.528378 systemd[1]: Reached target network.target - Network.
Apr 21 09:57:45.529039 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 09:57:45.584459 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 09:57:45.603458 systemd-networkd[1388]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:45.603469 systemd-networkd[1388]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:45.606649 systemd-networkd[1388]: eth1: Link UP
Apr 21 09:57:45.606662 systemd-networkd[1388]: eth1: Gained carrier
Apr 21 09:57:45.606685 systemd-networkd[1388]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:45.610198 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:45.610209 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 09:57:45.612940 systemd-networkd[1388]: eth0: Link UP
Apr 21 09:57:45.612950 systemd-networkd[1388]: eth0: Gained carrier
Apr 21 09:57:45.612973 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 09:57:45.635904 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 21 09:57:45.635988 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 21 09:57:45.636006 kernel: [drm] features: -context_init
Apr 21 09:57:45.638163 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1374)
Apr 21 09:57:45.644463 kernel: [drm] number of scanouts: 1
Apr 21 09:57:45.644552 kernel: [drm] number of cap sets: 0
Apr 21 09:57:45.644598 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 21 09:57:45.649974 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 21 09:57:45.650109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 09:57:45.651830 systemd-networkd[1388]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 09:57:45.654213 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 21 09:57:45.675093 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 09:57:45.674730 systemd-networkd[1388]: eth0: DHCPv4 address 178.104.214.66/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 09:57:45.675568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 09:57:45.676587 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 21 09:57:45.707442 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 21 09:57:45.724089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 09:57:45.728718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 09:57:45.729606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 09:57:45.729655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 09:57:45.730089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 09:57:45.731507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 09:57:45.732620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 09:57:45.732772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 09:57:45.748369 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 09:57:45.748750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 09:57:45.764043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 09:57:45.778047 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 09:57:45.780601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 09:57:45.780731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 09:57:45.783664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 09:57:45.793652 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 09:57:45.861592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 09:57:45.882451 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 09:57:45.888768 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 09:57:45.906383 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 09:57:45.933559 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 09:57:45.935011 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 09:57:45.936117 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 09:57:45.937290 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 09:57:45.938291 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 09:57:45.939573 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 09:57:45.940283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 09:57:45.941179 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 09:57:45.941937 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 09:57:45.941978 systemd[1]: Reached target paths.target - Path Units.
Apr 21 09:57:45.942510 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 09:57:45.946491 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 09:57:45.948694 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 09:57:45.954753 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 09:57:45.957570 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 09:57:45.959045 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 09:57:45.959985 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 09:57:45.960742 systemd[1]: Reached target basic.target - Basic System.
Apr 21 09:57:45.961341 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 09:57:45.961373 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 09:57:45.972243 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 09:57:45.978650 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 09:57:45.981026 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 09:57:45.984859 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 09:57:45.988607 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 09:57:45.994242 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 09:57:45.995939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 09:57:46.001565 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 09:57:46.002995 jq[1455]: false
Apr 21 09:57:46.007730 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 09:57:46.011187 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 21 09:57:46.018583 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 09:57:46.024688 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 09:57:46.032702 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 09:57:46.036244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 09:57:46.036806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 09:57:46.038674 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 09:57:46.046405 coreos-metadata[1453]: Apr 21 09:57:46.046 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 21 09:57:46.046312 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 09:57:46.047537 coreos-metadata[1453]: Apr 21 09:57:46.047 INFO Fetch successful
Apr 21 09:57:46.048474 coreos-metadata[1453]: Apr 21 09:57:46.048 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 21 09:57:46.049826 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 09:57:46.051708 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 09:57:46.052213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 09:57:46.059455 coreos-metadata[1453]: Apr 21 09:57:46.055 INFO Fetch successful Apr 21 09:57:46.079786 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 09:57:46.093564 extend-filesystems[1456]: Found loop4 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found loop5 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found loop6 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found loop7 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda1 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda2 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda3 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found usr Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda4 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda6 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda7 Apr 21 09:57:46.093564 extend-filesystems[1456]: Found sda9 Apr 21 09:57:46.093564 extend-filesystems[1456]: Checking size of /dev/sda9 Apr 21 09:57:46.099104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 09:57:46.106903 dbus-daemon[1454]: [system] SELinux support is enabled Apr 21 09:57:46.169790 extend-filesystems[1456]: Resized partition /dev/sda9 Apr 21 09:57:46.099305 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 09:57:46.177240 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Apr 21 09:57:46.190949 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 21 09:57:46.107076 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 21 09:57:46.112525 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 09:57:46.191284 tar[1468]: linux-arm64/LICENSE Apr 21 09:57:46.191284 tar[1468]: linux-arm64/helm Apr 21 09:57:46.112558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 09:57:46.192757 jq[1466]: true Apr 21 09:57:46.123358 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 09:57:46.123383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 21 09:57:46.195520 jq[1494]: true Apr 21 09:57:46.140932 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 09:57:46.141213 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 21 09:57:46.141410 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 09:57:46.204261 update_engine[1465]: I20260421 09:57:46.200176 1465 main.cc:92] Flatcar Update Engine starting Apr 21 09:57:46.215972 systemd[1]: Started update-engine.service - Update Engine. Apr 21 09:57:46.220048 update_engine[1465]: I20260421 09:57:46.217468 1465 update_check_scheduler.cc:74] Next update check in 7m20s Apr 21 09:57:46.230301 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 09:57:46.281520 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 21 09:57:46.283350 systemd-logind[1464]: New seat seat0. Apr 21 09:57:46.285955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 21 09:57:46.299999 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (Power Button) Apr 21 09:57:46.300017 systemd-logind[1464]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 21 09:57:46.300586 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 09:57:46.335597 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1400) Apr 21 09:57:46.353593 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Apr 21 09:57:46.356139 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 09:57:46.374780 systemd[1]: Starting sshkeys.service... Apr 21 09:57:46.400057 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 21 09:57:46.410211 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 21 09:57:46.420470 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 21 09:57:46.440077 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 21 09:57:46.440077 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 21 09:57:46.440077 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 21 09:57:46.443110 extend-filesystems[1456]: Resized filesystem in /dev/sda9 Apr 21 09:57:46.443110 extend-filesystems[1456]: Found sr0 Apr 21 09:57:46.449737 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 09:57:46.451513 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 21 09:57:46.460882 coreos-metadata[1532]: Apr 21 09:57:46.460 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 21 09:57:46.467543 coreos-metadata[1532]: Apr 21 09:57:46.467 INFO Fetch successful
Apr 21 09:57:46.474532 unknown[1532]: wrote ssh authorized keys file for user: core
Apr 21 09:57:46.502843 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 09:57:46.523786 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 09:57:46.524713 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 09:57:46.528777 systemd[1]: Finished sshkeys.service.
Apr 21 09:57:46.572308 containerd[1491]: time="2026-04-21T09:57:46.572207280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 09:57:46.637432 containerd[1491]: time="2026-04-21T09:57:46.635115400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.640703 containerd[1491]: time="2026-04-21T09:57:46.640655120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:46.640819 containerd[1491]: time="2026-04-21T09:57:46.640806000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 09:57:46.640905 containerd[1491]: time="2026-04-21T09:57:46.640891440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 09:57:46.641181 containerd[1491]: time="2026-04-21T09:57:46.641116760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 09:57:46.641280 containerd[1491]: time="2026-04-21T09:57:46.641263960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.641407 containerd[1491]: time="2026-04-21T09:57:46.641388760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:46.641516 containerd[1491]: time="2026-04-21T09:57:46.641496720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.641761 containerd[1491]: time="2026-04-21T09:57:46.641738280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:46.641970 containerd[1491]: time="2026-04-21T09:57:46.641955760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.643517 containerd[1491]: time="2026-04-21T09:57:46.643495600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.643635000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.643746760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.643990880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.644170200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.644186800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.644277040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 09:57:46.644374 containerd[1491]: time="2026-04-21T09:57:46.644329920Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 09:57:46.650912 containerd[1491]: time="2026-04-21T09:57:46.650690200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 09:57:46.650912 containerd[1491]: time="2026-04-21T09:57:46.650755960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 09:57:46.650912 containerd[1491]: time="2026-04-21T09:57:46.650774520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 09:57:46.650912 containerd[1491]: time="2026-04-21T09:57:46.650790480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 09:57:46.650912 containerd[1491]: time="2026-04-21T09:57:46.650804760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 09:57:46.651219 containerd[1491]: time="2026-04-21T09:57:46.651199480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 09:57:46.651590 containerd[1491]: time="2026-04-21T09:57:46.651570520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 09:57:46.651781 containerd[1491]: time="2026-04-21T09:57:46.651763600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 09:57:46.651859 containerd[1491]: time="2026-04-21T09:57:46.651841760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 09:57:46.651941 containerd[1491]: time="2026-04-21T09:57:46.651916920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 09:57:46.651999 containerd[1491]: time="2026-04-21T09:57:46.651987680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.652047 containerd[1491]: time="2026-04-21T09:57:46.652036960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.652098 containerd[1491]: time="2026-04-21T09:57:46.652087040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653450640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653490400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653504720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653516680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653529120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653551800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653565680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653581920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653606240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653718600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653736680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653749880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653761640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654448 containerd[1491]: time="2026-04-21T09:57:46.653777680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653793840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653805680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653817760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653828920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653844520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653872600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653884600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.653898800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.654037480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.654059200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.654070600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.654087240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 09:57:46.654754 containerd[1491]: time="2026-04-21T09:57:46.654097000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.655049 containerd[1491]: time="2026-04-21T09:57:46.654114760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 09:57:46.655049 containerd[1491]: time="2026-04-21T09:57:46.654135560Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 09:57:46.655049 containerd[1491]: time="2026-04-21T09:57:46.654156160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 09:57:46.655412 containerd[1491]: time="2026-04-21T09:57:46.655336800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 09:57:46.658338 containerd[1491]: time="2026-04-21T09:57:46.657463520Z" level=info msg="Connect containerd service"
Apr 21 09:57:46.658338 containerd[1491]: time="2026-04-21T09:57:46.657524840Z" level=info msg="using legacy CRI server"
Apr 21 09:57:46.658338 containerd[1491]: time="2026-04-21T09:57:46.657533000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 09:57:46.658338 containerd[1491]: time="2026-04-21T09:57:46.657641120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 09:57:46.658727 containerd[1491]: time="2026-04-21T09:57:46.658699320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 09:57:46.659175 containerd[1491]: time="2026-04-21T09:57:46.659053400Z" level=info msg="Start subscribing containerd event"
Apr 21 09:57:46.659175 containerd[1491]: time="2026-04-21T09:57:46.659135120Z" level=info msg="Start recovering state"
Apr 21 09:57:46.659237 containerd[1491]: time="2026-04-21T09:57:46.659217440Z" level=info msg="Start event monitor"
Apr 21 09:57:46.659237 containerd[1491]: time="2026-04-21T09:57:46.659229280Z" level=info msg="Start snapshots syncer"
Apr 21 09:57:46.659280 containerd[1491]: time="2026-04-21T09:57:46.659243240Z" level=info msg="Start cni network conf syncer for default"
Apr 21 09:57:46.659280 containerd[1491]: time="2026-04-21T09:57:46.659251080Z" level=info msg="Start streaming server"
Apr 21 09:57:46.665666 containerd[1491]: time="2026-04-21T09:57:46.664697080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 09:57:46.665666 containerd[1491]: time="2026-04-21T09:57:46.664757120Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 09:57:46.664939 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 09:57:46.668529 containerd[1491]: time="2026-04-21T09:57:46.667390520Z" level=info msg="containerd successfully booted in 0.096918s"
Apr 21 09:57:46.907052 tar[1468]: linux-arm64/README.md
Apr 21 09:57:46.917675 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 09:57:47.265631 systemd-networkd[1388]: eth1: Gained IPv6LL
Apr 21 09:57:47.268760 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection.
Apr 21 09:57:47.270790 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 09:57:47.275317 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 09:57:47.282848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 09:57:47.292386 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 09:57:47.317872 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 09:57:47.560687 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 09:57:47.590229 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 09:57:47.604151 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 09:57:47.608839 systemd[1]: Started sshd@0-178.104.214.66:22-50.85.169.122:33504.service - OpenSSH per-connection server daemon (50.85.169.122:33504). Apr 21 09:57:47.615595 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 09:57:47.617481 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 09:57:47.628978 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 09:57:47.650878 systemd-networkd[1388]: eth0: Gained IPv6LL Apr 21 09:57:47.652328 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Apr 21 09:57:47.669940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 09:57:47.680887 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 09:57:47.685671 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 21 09:57:47.686530 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 09:57:47.765844 sshd[1572]: Accepted publickey for core from 50.85.169.122 port 33504 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:47.769000 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:47.783264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 09:57:47.792899 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 09:57:47.800073 systemd-logind[1464]: New session 1 of user core. Apr 21 09:57:47.809573 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 09:57:47.822542 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 09:57:47.828780 (systemd)[1584]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 09:57:47.933031 systemd[1584]: Queued start job for default target default.target. 
Apr 21 09:57:47.941254 systemd[1584]: Created slice app.slice - User Application Slice. Apr 21 09:57:47.941293 systemd[1584]: Reached target paths.target - Paths. Apr 21 09:57:47.941310 systemd[1584]: Reached target timers.target - Timers. Apr 21 09:57:47.942999 systemd[1584]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 09:57:47.979464 systemd[1584]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 09:57:47.979736 systemd[1584]: Reached target sockets.target - Sockets. Apr 21 09:57:47.979753 systemd[1584]: Reached target basic.target - Basic System. Apr 21 09:57:47.979795 systemd[1584]: Reached target default.target - Main User Target. Apr 21 09:57:47.979821 systemd[1584]: Startup finished in 143ms. Apr 21 09:57:47.980882 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 09:57:47.990784 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 09:57:48.103600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:57:48.108499 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 09:57:48.116850 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 09:57:48.117130 systemd[1]: Started sshd@1-178.104.214.66:22-50.85.169.122:33520.service - OpenSSH per-connection server daemon (50.85.169.122:33520). Apr 21 09:57:48.120752 systemd[1]: Startup finished in 805ms (kernel) + 11.583s (initrd) + 4.881s (userspace) = 17.270s. Apr 21 09:57:48.244556 sshd[1601]: Accepted publickey for core from 50.85.169.122 port 33520 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:48.247125 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:48.254071 systemd-logind[1464]: New session 2 of user core. Apr 21 09:57:48.260658 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 21 09:57:48.360712 sshd[1601]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:48.367756 systemd[1]: sshd@1-178.104.214.66:22-50.85.169.122:33520.service: Deactivated successfully. Apr 21 09:57:48.370998 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 09:57:48.371842 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Apr 21 09:57:48.374873 systemd-logind[1464]: Removed session 2. Apr 21 09:57:48.393691 systemd[1]: Started sshd@2-178.104.214.66:22-50.85.169.122:33522.service - OpenSSH per-connection server daemon (50.85.169.122:33522). Apr 21 09:57:48.516562 sshd[1616]: Accepted publickey for core from 50.85.169.122 port 33522 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:48.518710 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:48.525142 systemd-logind[1464]: New session 3 of user core. Apr 21 09:57:48.531606 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 09:57:48.582305 kubelet[1599]: E0421 09:57:48.582158 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 09:57:48.586223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 09:57:48.586512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 09:57:48.630940 sshd[1616]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:48.637995 systemd[1]: sshd@2-178.104.214.66:22-50.85.169.122:33522.service: Deactivated successfully. Apr 21 09:57:48.641075 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 09:57:48.642455 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. 
Apr 21 09:57:48.643786 systemd-logind[1464]: Removed session 3. Apr 21 09:57:48.670051 systemd[1]: Started sshd@3-178.104.214.66:22-50.85.169.122:33528.service - OpenSSH per-connection server daemon (50.85.169.122:33528). Apr 21 09:57:48.796232 sshd[1626]: Accepted publickey for core from 50.85.169.122 port 33528 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:48.798755 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:48.803914 systemd-logind[1464]: New session 4 of user core. Apr 21 09:57:48.813003 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 09:57:48.913787 sshd[1626]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:48.919973 systemd[1]: sshd@3-178.104.214.66:22-50.85.169.122:33528.service: Deactivated successfully. Apr 21 09:57:48.923072 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 09:57:48.924161 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Apr 21 09:57:48.925657 systemd-logind[1464]: Removed session 4. Apr 21 09:57:48.935512 systemd[1]: Started sshd@4-178.104.214.66:22-50.85.169.122:33540.service - OpenSSH per-connection server daemon (50.85.169.122:33540). Apr 21 09:57:49.064870 sshd[1633]: Accepted publickey for core from 50.85.169.122 port 33540 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:49.067280 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:49.071984 systemd-logind[1464]: New session 5 of user core. Apr 21 09:57:49.082789 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 21 09:57:49.178095 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 09:57:49.179147 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:49.194588 sudo[1636]: pam_unix(sudo:session): session closed for user root Apr 21 09:57:49.212278 sshd[1633]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:49.218596 systemd[1]: sshd@4-178.104.214.66:22-50.85.169.122:33540.service: Deactivated successfully. Apr 21 09:57:49.220865 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 09:57:49.221661 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Apr 21 09:57:49.223037 systemd-logind[1464]: Removed session 5. Apr 21 09:57:49.244929 systemd[1]: Started sshd@5-178.104.214.66:22-50.85.169.122:46120.service - OpenSSH per-connection server daemon (50.85.169.122:46120). Apr 21 09:57:49.360664 sshd[1641]: Accepted publickey for core from 50.85.169.122 port 46120 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:49.362899 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:49.369121 systemd-logind[1464]: New session 6 of user core. Apr 21 09:57:49.374670 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 09:57:49.456874 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 09:57:49.457816 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:49.463176 sudo[1645]: pam_unix(sudo:session): session closed for user root Apr 21 09:57:49.469067 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 09:57:49.469400 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:49.496003 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 09:57:49.499922 auditctl[1648]: No rules Apr 21 09:57:49.501385 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 09:57:49.503525 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 09:57:49.510012 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 09:57:49.538648 augenrules[1666]: No rules Apr 21 09:57:49.540345 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 09:57:49.542345 sudo[1644]: pam_unix(sudo:session): session closed for user root Apr 21 09:57:49.558085 sshd[1641]: pam_unix(sshd:session): session closed for user core Apr 21 09:57:49.563582 systemd[1]: sshd@5-178.104.214.66:22-50.85.169.122:46120.service: Deactivated successfully. Apr 21 09:57:49.565371 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 09:57:49.566345 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Apr 21 09:57:49.567915 systemd-logind[1464]: Removed session 6. Apr 21 09:57:49.587921 systemd[1]: Started sshd@6-178.104.214.66:22-50.85.169.122:46126.service - OpenSSH per-connection server daemon (50.85.169.122:46126). 
Apr 21 09:57:49.714157 sshd[1674]: Accepted publickey for core from 50.85.169.122 port 46126 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:57:49.716659 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:57:49.724535 systemd-logind[1464]: New session 7 of user core. Apr 21 09:57:49.730804 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 09:57:49.818256 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 09:57:49.818673 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 09:57:50.121893 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 09:57:50.121940 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 09:57:50.380876 dockerd[1693]: time="2026-04-21T09:57:50.380753640Z" level=info msg="Starting up" Apr 21 09:57:50.460344 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport816644587-merged.mount: Deactivated successfully. Apr 21 09:57:50.572098 dockerd[1693]: time="2026-04-21T09:57:50.572043560Z" level=info msg="Loading containers: start." Apr 21 09:57:50.688775 kernel: Initializing XFRM netlink socket Apr 21 09:57:50.712283 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Apr 21 09:57:50.714128 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Apr 21 09:57:50.723239 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Apr 21 09:57:50.776602 systemd-networkd[1388]: docker0: Link UP Apr 21 09:57:50.776842 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Apr 21 09:57:50.803379 dockerd[1693]: time="2026-04-21T09:57:50.803274240Z" level=info msg="Loading containers: done." 
Apr 21 09:57:50.819575 dockerd[1693]: time="2026-04-21T09:57:50.819487840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 09:57:50.819818 dockerd[1693]: time="2026-04-21T09:57:50.819661400Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 09:57:50.819862 dockerd[1693]: time="2026-04-21T09:57:50.819839840Z" level=info msg="Daemon has completed initialization" Apr 21 09:57:50.862827 dockerd[1693]: time="2026-04-21T09:57:50.862080400Z" level=info msg="API listen on /run/docker.sock" Apr 21 09:57:50.862203 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 09:57:51.309703 containerd[1491]: time="2026-04-21T09:57:51.309663960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 21 09:57:51.888737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856375856.mount: Deactivated successfully. 
Apr 21 09:57:53.055737 containerd[1491]: time="2026-04-21T09:57:53.055663400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:53.057791 containerd[1491]: time="2026-04-21T09:57:53.057740400Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608883" Apr 21 09:57:53.058883 containerd[1491]: time="2026-04-21T09:57:53.058287440Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:53.069842 containerd[1491]: time="2026-04-21T09:57:53.069768920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:53.075432 containerd[1491]: time="2026-04-21T09:57:53.075346640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 1.76563944s" Apr 21 09:57:53.075634 containerd[1491]: time="2026-04-21T09:57:53.075612800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\"" Apr 21 09:57:53.077765 containerd[1491]: time="2026-04-21T09:57:53.077713040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 21 09:57:54.279823 containerd[1491]: time="2026-04-21T09:57:54.278845080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:54.280266 containerd[1491]: time="2026-04-21T09:57:54.279929080Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073314" Apr 21 09:57:54.282251 containerd[1491]: time="2026-04-21T09:57:54.281736960Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:54.286712 containerd[1491]: time="2026-04-21T09:57:54.286661120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:54.288612 containerd[1491]: time="2026-04-21T09:57:54.288573840Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 1.21081744s" Apr 21 09:57:54.288746 containerd[1491]: time="2026-04-21T09:57:54.288725680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\"" Apr 21 09:57:54.289815 containerd[1491]: time="2026-04-21T09:57:54.289789480Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 21 09:57:55.308169 containerd[1491]: time="2026-04-21T09:57:55.307983160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:55.309729 containerd[1491]: time="2026-04-21T09:57:55.309662400Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800856" Apr 21 09:57:55.311506 containerd[1491]: time="2026-04-21T09:57:55.311446920Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:55.315654 containerd[1491]: time="2026-04-21T09:57:55.315567160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:55.317252 containerd[1491]: time="2026-04-21T09:57:55.316832960Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 1.02691744s" Apr 21 09:57:55.317252 containerd[1491]: time="2026-04-21T09:57:55.316884680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\"" Apr 21 09:57:55.317528 containerd[1491]: time="2026-04-21T09:57:55.317503040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 21 09:57:56.188962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455441471.mount: Deactivated successfully. 
Apr 21 09:57:56.410398 containerd[1491]: time="2026-04-21T09:57:56.410301400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:56.411842 containerd[1491]: time="2026-04-21T09:57:56.411796480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340610" Apr 21 09:57:56.413221 containerd[1491]: time="2026-04-21T09:57:56.412432560Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:56.415132 containerd[1491]: time="2026-04-21T09:57:56.415094960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:56.415746 containerd[1491]: time="2026-04-21T09:57:56.415708400Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.09817156s" Apr 21 09:57:56.415802 containerd[1491]: time="2026-04-21T09:57:56.415745560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\"" Apr 21 09:57:56.416397 containerd[1491]: time="2026-04-21T09:57:56.416336000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 21 09:57:56.982833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061865714.mount: Deactivated successfully. 
Apr 21 09:57:58.068827 containerd[1491]: time="2026-04-21T09:57:58.068548080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:58.070183 containerd[1491]: time="2026-04-21T09:57:58.070144360Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309" Apr 21 09:57:58.071331 containerd[1491]: time="2026-04-21T09:57:58.070870480Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:58.074439 containerd[1491]: time="2026-04-21T09:57:58.074388160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:58.076491 containerd[1491]: time="2026-04-21T09:57:58.075889800Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.65949512s" Apr 21 09:57:58.076623 containerd[1491]: time="2026-04-21T09:57:58.076607280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Apr 21 09:57:58.077221 containerd[1491]: time="2026-04-21T09:57:58.077057680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 09:57:58.563168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400603525.mount: Deactivated successfully. 
Apr 21 09:57:58.571974 containerd[1491]: time="2026-04-21T09:57:58.571551240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:58.574448 containerd[1491]: time="2026-04-21T09:57:58.573264440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Apr 21 09:57:58.574448 containerd[1491]: time="2026-04-21T09:57:58.573521600Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:58.576111 containerd[1491]: time="2026-04-21T09:57:58.576028640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:58.577537 containerd[1491]: time="2026-04-21T09:57:58.576841640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 499.51148ms" Apr 21 09:57:58.577537 containerd[1491]: time="2026-04-21T09:57:58.576880720Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 21 09:57:58.577919 containerd[1491]: time="2026-04-21T09:57:58.577831600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 21 09:57:58.836767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 09:57:58.842865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 09:57:58.987982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:57:58.998450 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 09:57:59.056351 kubelet[1976]: E0421 09:57:59.056271 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 09:57:59.060449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 09:57:59.061214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 09:57:59.106662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666207524.mount: Deactivated successfully. Apr 21 09:57:59.806087 containerd[1491]: time="2026-04-21T09:57:59.805902680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:59.810004 containerd[1491]: time="2026-04-21T09:57:59.809925400Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752394" Apr 21 09:57:59.812098 containerd[1491]: time="2026-04-21T09:57:59.811985160Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:59.815792 containerd[1491]: time="2026-04-21T09:57:59.815657560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:57:59.816391 containerd[1491]: time="2026-04-21T09:57:59.816176120Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.2381458s" Apr 21 09:57:59.816391 containerd[1491]: time="2026-04-21T09:57:59.816219040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Apr 21 09:58:03.195941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:03.203776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:03.242900 systemd[1]: Reloading requested from client PID 2071 ('systemctl') (unit session-7.scope)... Apr 21 09:58:03.242926 systemd[1]: Reloading... Apr 21 09:58:03.365458 zram_generator::config[2114]: No configuration found. Apr 21 09:58:03.467936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 09:58:03.539309 systemd[1]: Reloading finished in 295 ms. Apr 21 09:58:03.595577 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:03.599312 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 09:58:03.599616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:03.603807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:03.735771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 09:58:03.750144 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 09:58:03.802575 kubelet[2161]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:04.531445 kubelet[2161]: I0421 09:58:04.530707 2161 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 09:58:04.531445 kubelet[2161]: I0421 09:58:04.530767 2161 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 09:58:04.531445 kubelet[2161]: I0421 09:58:04.530796 2161 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 09:58:04.531445 kubelet[2161]: I0421 09:58:04.530802 2161 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 09:58:04.531445 kubelet[2161]: I0421 09:58:04.531392 2161 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 09:58:04.539625 kubelet[2161]: E0421 09:58:04.539573 2161 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.104.214.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.104.214.66:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 09:58:04.541405 kubelet[2161]: I0421 09:58:04.541366 2161 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 09:58:04.547309 kubelet[2161]: E0421 09:58:04.547226 2161 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 09:58:04.547467 kubelet[2161]: 
I0421 09:58:04.547383 2161 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 09:58:04.550098 kubelet[2161]: I0421 09:58:04.550032 2161 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 21 09:58:04.551208 kubelet[2161]: I0421 09:58:04.551133 2161 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 09:58:04.551373 kubelet[2161]: I0421 09:58:04.551191 2161 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-a-ee081c135b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory"
:null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 09:58:04.551373 kubelet[2161]: I0421 09:58:04.551359 2161 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 09:58:04.551373 kubelet[2161]: I0421 09:58:04.551368 2161 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 09:58:04.551612 kubelet[2161]: I0421 09:58:04.551506 2161 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 09:58:04.553550 kubelet[2161]: I0421 09:58:04.553507 2161 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 09:58:04.553774 kubelet[2161]: I0421 09:58:04.553759 2161 kubelet.go:482] "Attempting to sync node with API server" Apr 21 09:58:04.553774 kubelet[2161]: I0421 09:58:04.553780 2161 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 09:58:04.555522 kubelet[2161]: I0421 09:58:04.553799 2161 kubelet.go:394] "Adding apiserver pod source" Apr 21 09:58:04.555522 kubelet[2161]: I0421 09:58:04.553809 2161 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 09:58:04.557448 kubelet[2161]: I0421 09:58:04.557371 2161 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 09:58:04.558643 kubelet[2161]: I0421 09:58:04.558614 2161 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 09:58:04.558720 kubelet[2161]: I0421 09:58:04.558652 2161 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 09:58:04.558720 kubelet[2161]: W0421 09:58:04.558697 2161 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 09:58:04.561354 kubelet[2161]: I0421 09:58:04.561281 2161 server.go:1257] "Started kubelet" Apr 21 09:58:04.564281 kubelet[2161]: I0421 09:58:04.564227 2161 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 09:58:04.567501 kubelet[2161]: I0421 09:58:04.567398 2161 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 09:58:04.567501 kubelet[2161]: I0421 09:58:04.567509 2161 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 09:58:04.567705 kubelet[2161]: I0421 09:58:04.567688 2161 server.go:317] "Adding debug handlers to kubelet server" Apr 21 09:58:04.567865 kubelet[2161]: I0421 09:58:04.567837 2161 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 09:58:04.571768 kubelet[2161]: I0421 09:58:04.571738 2161 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 09:58:04.575527 kubelet[2161]: E0421 09:58:04.574200 2161 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.104.214.66:6443/api/v1/namespaces/default/events\": dial tcp 178.104.214.66:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-a-ee081c135b.18a856cb6472be50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-a-ee081c135b,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-a-ee081c135b,},FirstTimestamp:2026-04-21 09:58:04.56125192 +0000 UTC m=+0.805415721,LastTimestamp:2026-04-21 09:58:04.56125192 +0000 UTC m=+0.805415721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-a-ee081c135b,}" Apr 21 09:58:04.577779 
kubelet[2161]: I0421 09:58:04.577749 2161 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 09:58:04.579677 kubelet[2161]: I0421 09:58:04.579653 2161 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 09:58:04.580088 kubelet[2161]: E0421 09:58:04.580030 2161 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-ee081c135b\" not found" Apr 21 09:58:04.581721 kubelet[2161]: E0421 09:58:04.581669 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.214.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-ee081c135b?timeout=10s\": dial tcp 178.104.214.66:6443: connect: connection refused" interval="200ms" Apr 21 09:58:04.582074 kubelet[2161]: I0421 09:58:04.582035 2161 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 09:58:04.582697 kubelet[2161]: I0421 09:58:04.582658 2161 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 09:58:04.583941 kubelet[2161]: I0421 09:58:04.583905 2161 reconciler.go:29] "Reconciler: start to sync state" Apr 21 09:58:04.584787 kubelet[2161]: E0421 09:58:04.584754 2161 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 09:58:04.585193 kubelet[2161]: I0421 09:58:04.585159 2161 factory.go:223] Registration of the containerd container factory successfully Apr 21 09:58:04.585242 kubelet[2161]: I0421 09:58:04.585199 2161 factory.go:223] Registration of the systemd container factory successfully Apr 21 09:58:04.594999 kubelet[2161]: I0421 09:58:04.594961 2161 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 09:58:04.596142 kubelet[2161]: I0421 09:58:04.596119 2161 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 09:58:04.596480 kubelet[2161]: I0421 09:58:04.596239 2161 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 09:58:04.596480 kubelet[2161]: I0421 09:58:04.596267 2161 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 09:58:04.596480 kubelet[2161]: E0421 09:58:04.596309 2161 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 09:58:04.612327 kubelet[2161]: I0421 09:58:04.612300 2161 cpu_manager.go:225] "Starting" policy="none" Apr 21 09:58:04.612327 kubelet[2161]: I0421 09:58:04.612320 2161 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 09:58:04.612510 kubelet[2161]: I0421 09:58:04.612345 2161 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 09:58:04.615455 kubelet[2161]: I0421 09:58:04.615190 2161 policy_none.go:50] "Start" Apr 21 09:58:04.615455 kubelet[2161]: I0421 09:58:04.615213 2161 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 09:58:04.615455 kubelet[2161]: I0421 09:58:04.615224 2161 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 09:58:04.616726 kubelet[2161]: I0421 09:58:04.616704 2161 policy_none.go:44] "Start" Apr 21 09:58:04.620801 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 09:58:04.634723 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 21 09:58:04.651613 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 21 09:58:04.653994 kubelet[2161]: E0421 09:58:04.653950 2161 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 09:58:04.655552 kubelet[2161]: I0421 09:58:04.654940 2161 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 09:58:04.655794 kubelet[2161]: I0421 09:58:04.655584 2161 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 09:58:04.656138 kubelet[2161]: I0421 09:58:04.656106 2161 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 09:58:04.656966 kubelet[2161]: E0421 09:58:04.656938 2161 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 09:58:04.657153 kubelet[2161]: E0421 09:58:04.657132 2161 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-a-ee081c135b\" not found" Apr 21 09:58:04.714082 systemd[1]: Created slice kubepods-burstable-pod94d9eba51f75fff97e011e61fc339c31.slice - libcontainer container kubepods-burstable-pod94d9eba51f75fff97e011e61fc339c31.slice. Apr 21 09:58:04.727617 kubelet[2161]: E0421 09:58:04.727221 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.734491 systemd[1]: Created slice kubepods-burstable-pod1aec3c804dcbb843caade117ce8a9be5.slice - libcontainer container kubepods-burstable-pod1aec3c804dcbb843caade117ce8a9be5.slice. 
Apr 21 09:58:04.743225 kubelet[2161]: E0421 09:58:04.743148 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.746898 systemd[1]: Created slice kubepods-burstable-poda46e34b577dcea600f7e67a6313dab1d.slice - libcontainer container kubepods-burstable-poda46e34b577dcea600f7e67a6313dab1d.slice. Apr 21 09:58:04.750123 kubelet[2161]: E0421 09:58:04.750066 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.758970 kubelet[2161]: I0421 09:58:04.758872 2161 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.761544 kubelet[2161]: E0421 09:58:04.761480 2161 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.104.214.66:6443/api/v1/nodes\": dial tcp 178.104.214.66:6443: connect: connection refused" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.783487 kubelet[2161]: E0421 09:58:04.783033 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.214.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-ee081c135b?timeout=10s\": dial tcp 178.104.214.66:6443: connect: connection refused" interval="400ms" Apr 21 09:58:04.786075 kubelet[2161]: I0421 09:58:04.785450 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1aec3c804dcbb843caade117ce8a9be5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" (UID: \"1aec3c804dcbb843caade117ce8a9be5\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786075 kubelet[2161]: I0421 09:58:04.785527 2161 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786075 kubelet[2161]: I0421 09:58:04.785594 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786075 kubelet[2161]: I0421 09:58:04.785678 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94d9eba51f75fff97e011e61fc339c31-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-a-ee081c135b\" (UID: \"94d9eba51f75fff97e011e61fc339c31\") " pod="kube-system/kube-scheduler-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786075 kubelet[2161]: I0421 09:58:04.785711 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1aec3c804dcbb843caade117ce8a9be5-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" (UID: \"1aec3c804dcbb843caade117ce8a9be5\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786493 kubelet[2161]: I0421 09:58:04.785764 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: 
\"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786493 kubelet[2161]: I0421 09:58:04.785809 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786493 kubelet[2161]: I0421 09:58:04.785844 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.786493 kubelet[2161]: I0421 09:58:04.785887 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1aec3c804dcbb843caade117ce8a9be5-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" (UID: \"1aec3c804dcbb843caade117ce8a9be5\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.965275 kubelet[2161]: I0421 09:58:04.965208 2161 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:04.965896 kubelet[2161]: E0421 09:58:04.965571 2161 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.104.214.66:6443/api/v1/nodes\": dial tcp 178.104.214.66:6443: connect: connection refused" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:05.033195 containerd[1491]: time="2026-04-21T09:58:05.033133280Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-a-ee081c135b,Uid:94d9eba51f75fff97e011e61fc339c31,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:05.050327 containerd[1491]: time="2026-04-21T09:58:05.050162000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-a-ee081c135b,Uid:1aec3c804dcbb843caade117ce8a9be5,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:05.053588 containerd[1491]: time="2026-04-21T09:58:05.053236640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-a-ee081c135b,Uid:a46e34b577dcea600f7e67a6313dab1d,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:05.185283 kubelet[2161]: E0421 09:58:05.183752 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.214.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-ee081c135b?timeout=10s\": dial tcp 178.104.214.66:6443: connect: connection refused" interval="800ms" Apr 21 09:58:05.368566 kubelet[2161]: I0421 09:58:05.368400 2161 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:05.369193 kubelet[2161]: E0421 09:58:05.368748 2161 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://178.104.214.66:6443/api/v1/nodes\": dial tcp 178.104.214.66:6443: connect: connection refused" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:05.502260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842029828.mount: Deactivated successfully. 
Apr 21 09:58:05.517088 containerd[1491]: time="2026-04-21T09:58:05.515898240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:05.517874 containerd[1491]: time="2026-04-21T09:58:05.517834920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:05.520512 containerd[1491]: time="2026-04-21T09:58:05.520469880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 09:58:05.521743 containerd[1491]: time="2026-04-21T09:58:05.521698000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 09:58:05.523864 containerd[1491]: time="2026-04-21T09:58:05.522629880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:05.524929 containerd[1491]: time="2026-04-21T09:58:05.524877920Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:05.527572 containerd[1491]: time="2026-04-21T09:58:05.527523320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 21 09:58:05.530325 containerd[1491]: time="2026-04-21T09:58:05.530274040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 09:58:05.532603 
containerd[1491]: time="2026-04-21T09:58:05.532559200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.20748ms" Apr 21 09:58:05.533568 containerd[1491]: time="2026-04-21T09:58:05.533532960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.2864ms" Apr 21 09:58:05.536333 containerd[1491]: time="2026-04-21T09:58:05.536291800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.03376ms" Apr 21 09:58:05.688407 containerd[1491]: time="2026-04-21T09:58:05.687584520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:05.688407 containerd[1491]: time="2026-04-21T09:58:05.687636160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:05.688407 containerd[1491]: time="2026-04-21T09:58:05.687646840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:05.688407 containerd[1491]: time="2026-04-21T09:58:05.687718280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:05.695788 containerd[1491]: time="2026-04-21T09:58:05.694496160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:05.695788 containerd[1491]: time="2026-04-21T09:58:05.694588160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:05.695788 containerd[1491]: time="2026-04-21T09:58:05.694635720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:05.696342 containerd[1491]: time="2026-04-21T09:58:05.696071600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:05.696342 containerd[1491]: time="2026-04-21T09:58:05.696138920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:05.696342 containerd[1491]: time="2026-04-21T09:58:05.696162640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:05.696342 containerd[1491]: time="2026-04-21T09:58:05.696254640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:05.696953 containerd[1491]: time="2026-04-21T09:58:05.696833240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:05.721670 systemd[1]: Started cri-containerd-a9718f7546b8f80b0df6353c17a410fa0e8f947099b9f73bc0a88e57761c7cf5.scope - libcontainer container a9718f7546b8f80b0df6353c17a410fa0e8f947099b9f73bc0a88e57761c7cf5. 
Apr 21 09:58:05.725554 systemd[1]: Started cri-containerd-9757a1fb1b5e90b534d582a7c3b7f17cabe083e1408407026ad0452ae0b21cfd.scope - libcontainer container 9757a1fb1b5e90b534d582a7c3b7f17cabe083e1408407026ad0452ae0b21cfd. Apr 21 09:58:05.742907 systemd[1]: Started cri-containerd-e05e74114e8124a7bb1e7ef21359b0886438be170233c5db15366559c2be2e54.scope - libcontainer container e05e74114e8124a7bb1e7ef21359b0886438be170233c5db15366559c2be2e54. Apr 21 09:58:05.805870 containerd[1491]: time="2026-04-21T09:58:05.805674200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-a-ee081c135b,Uid:a46e34b577dcea600f7e67a6313dab1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9718f7546b8f80b0df6353c17a410fa0e8f947099b9f73bc0a88e57761c7cf5\"" Apr 21 09:58:05.810502 containerd[1491]: time="2026-04-21T09:58:05.810286560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-a-ee081c135b,Uid:94d9eba51f75fff97e011e61fc339c31,Namespace:kube-system,Attempt:0,} returns sandbox id \"9757a1fb1b5e90b534d582a7c3b7f17cabe083e1408407026ad0452ae0b21cfd\"" Apr 21 09:58:05.818743 containerd[1491]: time="2026-04-21T09:58:05.818333560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-a-ee081c135b,Uid:1aec3c804dcbb843caade117ce8a9be5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05e74114e8124a7bb1e7ef21359b0886438be170233c5db15366559c2be2e54\"" Apr 21 09:58:05.822393 containerd[1491]: time="2026-04-21T09:58:05.822180440Z" level=info msg="CreateContainer within sandbox \"a9718f7546b8f80b0df6353c17a410fa0e8f947099b9f73bc0a88e57761c7cf5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 09:58:05.824464 containerd[1491]: time="2026-04-21T09:58:05.824395800Z" level=info msg="CreateContainer within sandbox \"9757a1fb1b5e90b534d582a7c3b7f17cabe083e1408407026ad0452ae0b21cfd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 
21 09:58:05.826857 containerd[1491]: time="2026-04-21T09:58:05.826810320Z" level=info msg="CreateContainer within sandbox \"e05e74114e8124a7bb1e7ef21359b0886438be170233c5db15366559c2be2e54\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 09:58:05.849959 containerd[1491]: time="2026-04-21T09:58:05.849906920Z" level=info msg="CreateContainer within sandbox \"9757a1fb1b5e90b534d582a7c3b7f17cabe083e1408407026ad0452ae0b21cfd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c0a60cd2d48ea574f4d4404265709cb88588381a3241a398c42881c310208c75\"" Apr 21 09:58:05.850966 containerd[1491]: time="2026-04-21T09:58:05.850911560Z" level=info msg="StartContainer for \"c0a60cd2d48ea574f4d4404265709cb88588381a3241a398c42881c310208c75\"" Apr 21 09:58:05.855784 containerd[1491]: time="2026-04-21T09:58:05.855724920Z" level=info msg="CreateContainer within sandbox \"a9718f7546b8f80b0df6353c17a410fa0e8f947099b9f73bc0a88e57761c7cf5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"38ccbef90212d4d5c6ac93f99dd523cbdd0d5ad6f6bfc048c823fa109d8bba7b\"" Apr 21 09:58:05.856699 containerd[1491]: time="2026-04-21T09:58:05.856669160Z" level=info msg="StartContainer for \"38ccbef90212d4d5c6ac93f99dd523cbdd0d5ad6f6bfc048c823fa109d8bba7b\"" Apr 21 09:58:05.863511 containerd[1491]: time="2026-04-21T09:58:05.863464480Z" level=info msg="CreateContainer within sandbox \"e05e74114e8124a7bb1e7ef21359b0886438be170233c5db15366559c2be2e54\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b79a5ec90babc560a84882b8d0f44508a83bb12690b4ca851c77c35dc54c86d\"" Apr 21 09:58:05.865471 containerd[1491]: time="2026-04-21T09:58:05.864122240Z" level=info msg="StartContainer for \"8b79a5ec90babc560a84882b8d0f44508a83bb12690b4ca851c77c35dc54c86d\"" Apr 21 09:58:05.891649 systemd[1]: Started cri-containerd-c0a60cd2d48ea574f4d4404265709cb88588381a3241a398c42881c310208c75.scope - libcontainer container 
c0a60cd2d48ea574f4d4404265709cb88588381a3241a398c42881c310208c75. Apr 21 09:58:05.904594 systemd[1]: Started cri-containerd-38ccbef90212d4d5c6ac93f99dd523cbdd0d5ad6f6bfc048c823fa109d8bba7b.scope - libcontainer container 38ccbef90212d4d5c6ac93f99dd523cbdd0d5ad6f6bfc048c823fa109d8bba7b. Apr 21 09:58:05.913768 systemd[1]: Started cri-containerd-8b79a5ec90babc560a84882b8d0f44508a83bb12690b4ca851c77c35dc54c86d.scope - libcontainer container 8b79a5ec90babc560a84882b8d0f44508a83bb12690b4ca851c77c35dc54c86d. Apr 21 09:58:05.974138 containerd[1491]: time="2026-04-21T09:58:05.973360840Z" level=info msg="StartContainer for \"c0a60cd2d48ea574f4d4404265709cb88588381a3241a398c42881c310208c75\" returns successfully" Apr 21 09:58:05.984906 kubelet[2161]: E0421 09:58:05.984851 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://178.104.214.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-ee081c135b?timeout=10s\": dial tcp 178.104.214.66:6443: connect: connection refused" interval="1.6s" Apr 21 09:58:05.988615 containerd[1491]: time="2026-04-21T09:58:05.988108040Z" level=info msg="StartContainer for \"38ccbef90212d4d5c6ac93f99dd523cbdd0d5ad6f6bfc048c823fa109d8bba7b\" returns successfully" Apr 21 09:58:05.992678 containerd[1491]: time="2026-04-21T09:58:05.992522440Z" level=info msg="StartContainer for \"8b79a5ec90babc560a84882b8d0f44508a83bb12690b4ca851c77c35dc54c86d\" returns successfully" Apr 21 09:58:06.171555 kubelet[2161]: I0421 09:58:06.171524 2161 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:06.620446 kubelet[2161]: E0421 09:58:06.619972 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:06.622809 kubelet[2161]: E0421 09:58:06.622478 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:06.627492 kubelet[2161]: E0421 09:58:06.625798 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.627450 kubelet[2161]: E0421 09:58:07.626990 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.630378 kubelet[2161]: E0421 09:58:07.630101 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.650489 kubelet[2161]: E0421 09:58:07.650440 2161 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-a-ee081c135b\" not found" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.721946 kubelet[2161]: I0421 09:58:07.721669 2161 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.781654 kubelet[2161]: I0421 09:58:07.781618 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.824450 kubelet[2161]: E0421 09:58:07.822926 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-a-ee081c135b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.824450 kubelet[2161]: I0421 09:58:07.822962 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.833968 kubelet[2161]: E0421 09:58:07.833707 2161 kubelet.go:3342] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.833968 kubelet[2161]: I0421 09:58:07.833746 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:07.839178 kubelet[2161]: E0421 09:58:07.839129 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:08.558724 kubelet[2161]: I0421 09:58:08.558344 2161 apiserver.go:52] "Watching apiserver" Apr 21 09:58:08.583035 kubelet[2161]: I0421 09:58:08.582878 2161 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 09:58:08.628318 kubelet[2161]: I0421 09:58:08.628012 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:10.201592 systemd[1]: Reloading requested from client PID 2445 ('systemctl') (unit session-7.scope)... Apr 21 09:58:10.201934 systemd[1]: Reloading... Apr 21 09:58:10.300478 zram_generator::config[2488]: No configuration found. Apr 21 09:58:10.406903 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 09:58:10.490153 systemd[1]: Reloading finished in 287 ms. Apr 21 09:58:10.533807 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:10.552103 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 09:58:10.552459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 09:58:10.552538 systemd[1]: kubelet.service: Consumed 1.232s CPU time, 123.5M memory peak, 0B memory swap peak. Apr 21 09:58:10.557869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 09:58:10.705666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 09:58:10.719472 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 09:58:10.794573 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 09:58:10.807853 kubelet[2531]: I0421 09:58:10.807797 2531 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 09:58:10.808470 kubelet[2531]: I0421 09:58:10.808001 2531 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 09:58:10.808470 kubelet[2531]: I0421 09:58:10.808253 2531 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 09:58:10.808470 kubelet[2531]: I0421 09:58:10.808266 2531 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 09:58:10.808998 kubelet[2531]: I0421 09:58:10.808979 2531 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 09:58:10.811592 kubelet[2531]: I0421 09:58:10.811571 2531 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 09:58:10.814983 kubelet[2531]: I0421 09:58:10.814942 2531 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 09:58:10.822131 kubelet[2531]: E0421 09:58:10.821755 2531 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 09:58:10.822131 kubelet[2531]: I0421 09:58:10.821870 2531 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 09:58:10.825469 kubelet[2531]: I0421 09:58:10.825215 2531 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 09:58:10.826737 kubelet[2531]: I0421 09:58:10.825794 2531 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 09:58:10.826737 kubelet[2531]: I0421 09:58:10.825826 2531 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-a-ee081c135b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 09:58:10.826737 kubelet[2531]: I0421 09:58:10.825979 2531 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 
09:58:10.826737 kubelet[2531]: I0421 09:58:10.825986 2531 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 09:58:10.826951 kubelet[2531]: I0421 09:58:10.826007 2531 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 09:58:10.826951 kubelet[2531]: I0421 09:58:10.826220 2531 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 09:58:10.826951 kubelet[2531]: I0421 09:58:10.826360 2531 kubelet.go:482] "Attempting to sync node with API server" Apr 21 09:58:10.826951 kubelet[2531]: I0421 09:58:10.826377 2531 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 09:58:10.826951 kubelet[2531]: I0421 09:58:10.826392 2531 kubelet.go:394] "Adding apiserver pod source" Apr 21 09:58:10.826951 kubelet[2531]: I0421 09:58:10.826405 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 09:58:10.829187 kubelet[2531]: I0421 09:58:10.829157 2531 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 09:58:10.830545 kubelet[2531]: I0421 09:58:10.830103 2531 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 09:58:10.830545 kubelet[2531]: I0421 09:58:10.830151 2531 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 09:58:10.832310 kubelet[2531]: I0421 09:58:10.832282 2531 server.go:1257] "Started kubelet" Apr 21 09:58:10.834559 kubelet[2531]: I0421 09:58:10.834504 2531 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 09:58:10.835172 kubelet[2531]: I0421 09:58:10.835116 2531 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 09:58:10.835247 kubelet[2531]: I0421 09:58:10.835183 
2531 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 09:58:10.835439 kubelet[2531]: I0421 09:58:10.835395 2531 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 09:58:10.835908 kubelet[2531]: I0421 09:58:10.835891 2531 server.go:317] "Adding debug handlers to kubelet server" Apr 21 09:58:10.844449 kubelet[2531]: I0421 09:58:10.844059 2531 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 09:58:10.856779 kubelet[2531]: I0421 09:58:10.856737 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 09:58:10.858394 kubelet[2531]: I0421 09:58:10.858364 2531 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 09:58:10.860749 kubelet[2531]: E0421 09:58:10.860709 2531 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-ee081c135b\" not found" Apr 21 09:58:10.861663 kubelet[2531]: I0421 09:58:10.861644 2531 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 09:58:10.862647 kubelet[2531]: I0421 09:58:10.861850 2531 reconciler.go:29] "Reconciler: start to sync state" Apr 21 09:58:10.863229 kubelet[2531]: I0421 09:58:10.863204 2531 factory.go:223] Registration of the systemd container factory successfully Apr 21 09:58:10.863493 kubelet[2531]: I0421 09:58:10.863475 2531 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 09:58:10.875636 kubelet[2531]: I0421 09:58:10.874841 2531 factory.go:223] Registration of the containerd container factory successfully Apr 21 09:58:10.876147 kubelet[2531]: I0421 09:58:10.876111 2531 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 09:58:10.878884 kubelet[2531]: I0421 09:58:10.878477 2531 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 09:58:10.878884 kubelet[2531]: I0421 09:58:10.878510 2531 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 09:58:10.878884 kubelet[2531]: I0421 09:58:10.878532 2531 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 09:58:10.878884 kubelet[2531]: E0421 09:58:10.878587 2531 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 09:58:10.886455 kubelet[2531]: E0421 09:58:10.886392 2531 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 09:58:10.946906 kubelet[2531]: I0421 09:58:10.946879 2531 cpu_manager.go:225] "Starting" policy="none" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947069 2531 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947095 2531 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947246 2531 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947258 2531 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947276 2531 policy_none.go:50] "Start" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947285 2531 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947294 2531 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 
09:58:10.947467 kubelet[2531]: I0421 09:58:10.947400 2531 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 09:58:10.947467 kubelet[2531]: I0421 09:58:10.947408 2531 policy_none.go:44] "Start" Apr 21 09:58:10.954948 kubelet[2531]: E0421 09:58:10.954447 2531 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 09:58:10.954948 kubelet[2531]: I0421 09:58:10.954644 2531 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 09:58:10.954948 kubelet[2531]: I0421 09:58:10.954655 2531 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 09:58:10.955977 kubelet[2531]: I0421 09:58:10.955583 2531 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 09:58:10.960501 kubelet[2531]: E0421 09:58:10.959391 2531 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 09:58:10.979539 kubelet[2531]: I0421 09:58:10.979498 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:10.980499 kubelet[2531]: I0421 09:58:10.980338 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:10.982459 kubelet[2531]: I0421 09:58:10.981787 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:10.998179 kubelet[2531]: E0421 09:58:10.998121 2531 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.062510 kubelet[2531]: I0421 09:58:11.060237 2531 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.075669 kubelet[2531]: I0421 09:58:11.075623 2531 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.075872 kubelet[2531]: I0421 09:58:11.075744 2531 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.162844 kubelet[2531]: I0421 09:58:11.162737 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.162844 kubelet[2531]: I0421 09:58:11.162824 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163264 kubelet[2531]: I0421 09:58:11.162866 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163264 kubelet[2531]: I0421 09:58:11.162912 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163264 kubelet[2531]: I0421 09:58:11.162951 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a46e34b577dcea600f7e67a6313dab1d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-a-ee081c135b\" (UID: \"a46e34b577dcea600f7e67a6313dab1d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163264 kubelet[2531]: I0421 09:58:11.162989 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94d9eba51f75fff97e011e61fc339c31-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-a-ee081c135b\" (UID: \"94d9eba51f75fff97e011e61fc339c31\") " pod="kube-system/kube-scheduler-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163264 kubelet[2531]: I0421 09:58:11.163040 2531 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1aec3c804dcbb843caade117ce8a9be5-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" (UID: \"1aec3c804dcbb843caade117ce8a9be5\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163645 kubelet[2531]: I0421 09:58:11.163077 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1aec3c804dcbb843caade117ce8a9be5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" (UID: \"1aec3c804dcbb843caade117ce8a9be5\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.163645 kubelet[2531]: I0421 09:58:11.163123 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1aec3c804dcbb843caade117ce8a9be5-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-a-ee081c135b\" (UID: \"1aec3c804dcbb843caade117ce8a9be5\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" Apr 21 09:58:11.829930 kubelet[2531]: I0421 09:58:11.828614 2531 apiserver.go:52] "Watching apiserver" Apr 21 09:58:11.862784 kubelet[2531]: I0421 09:58:11.862700 2531 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 09:58:11.975206 kubelet[2531]: I0421 09:58:11.974753 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-a-ee081c135b" podStartSLOduration=1.9747274799999999 podStartE2EDuration="1.97472748s" podCreationTimestamp="2026-04-21 09:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:11.96058636 +0000 UTC m=+1.233678201" watchObservedRunningTime="2026-04-21 09:58:11.97472748 +0000 UTC 
m=+1.247819321" Apr 21 09:58:11.988975 kubelet[2531]: I0421 09:58:11.988913 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-a-ee081c135b" podStartSLOduration=3.98889644 podStartE2EDuration="3.98889644s" podCreationTimestamp="2026-04-21 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:11.97507716 +0000 UTC m=+1.248169001" watchObservedRunningTime="2026-04-21 09:58:11.98889644 +0000 UTC m=+1.261988281" Apr 21 09:58:12.007837 kubelet[2531]: I0421 09:58:12.007700 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-ee081c135b" podStartSLOduration=2.0076864 podStartE2EDuration="2.0076864s" podCreationTimestamp="2026-04-21 09:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:11.98992404 +0000 UTC m=+1.263015881" watchObservedRunningTime="2026-04-21 09:58:12.0076864 +0000 UTC m=+1.280778241" Apr 21 09:58:15.473861 kubelet[2531]: I0421 09:58:15.473751 2531 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 09:58:15.474976 containerd[1491]: time="2026-04-21T09:58:15.474785440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 09:58:15.476471 kubelet[2531]: I0421 09:58:15.475518 2531 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 09:58:16.540082 systemd[1]: Created slice kubepods-besteffort-pod8c1a058a_02d3_4fb1_bc84_de9c0de7f886.slice - libcontainer container kubepods-besteffort-pod8c1a058a_02d3_4fb1_bc84_de9c0de7f886.slice. 
Apr 21 09:58:16.601611 kubelet[2531]: I0421 09:58:16.601542 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c1a058a-02d3-4fb1-bc84-de9c0de7f886-kube-proxy\") pod \"kube-proxy-zlz6l\" (UID: \"8c1a058a-02d3-4fb1-bc84-de9c0de7f886\") " pod="kube-system/kube-proxy-zlz6l" Apr 21 09:58:16.601611 kubelet[2531]: I0421 09:58:16.601610 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c1a058a-02d3-4fb1-bc84-de9c0de7f886-xtables-lock\") pod \"kube-proxy-zlz6l\" (UID: \"8c1a058a-02d3-4fb1-bc84-de9c0de7f886\") " pod="kube-system/kube-proxy-zlz6l" Apr 21 09:58:16.602068 kubelet[2531]: I0421 09:58:16.601648 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c1a058a-02d3-4fb1-bc84-de9c0de7f886-lib-modules\") pod \"kube-proxy-zlz6l\" (UID: \"8c1a058a-02d3-4fb1-bc84-de9c0de7f886\") " pod="kube-system/kube-proxy-zlz6l" Apr 21 09:58:16.602068 kubelet[2531]: I0421 09:58:16.601684 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lfw\" (UniqueName: \"kubernetes.io/projected/8c1a058a-02d3-4fb1-bc84-de9c0de7f886-kube-api-access-w4lfw\") pod \"kube-proxy-zlz6l\" (UID: \"8c1a058a-02d3-4fb1-bc84-de9c0de7f886\") " pod="kube-system/kube-proxy-zlz6l" Apr 21 09:58:16.825966 systemd[1]: Created slice kubepods-besteffort-pod5aa600b3_b023_4f80_8dcd_08b52cb64c08.slice - libcontainer container kubepods-besteffort-pod5aa600b3_b023_4f80_8dcd_08b52cb64c08.slice. 
Apr 21 09:58:16.853460 containerd[1491]: time="2026-04-21T09:58:16.853236120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlz6l,Uid:8c1a058a-02d3-4fb1-bc84-de9c0de7f886,Namespace:kube-system,Attempt:0,}" Apr 21 09:58:16.878485 containerd[1491]: time="2026-04-21T09:58:16.878086000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:16.878485 containerd[1491]: time="2026-04-21T09:58:16.878154640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:16.878485 containerd[1491]: time="2026-04-21T09:58:16.878170800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:16.878930 containerd[1491]: time="2026-04-21T09:58:16.878335600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:16.904401 kubelet[2531]: I0421 09:58:16.904365 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5aa600b3-b023-4f80-8dcd-08b52cb64c08-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-bsmjj\" (UID: \"5aa600b3-b023-4f80-8dcd-08b52cb64c08\") " pod="tigera-operator/tigera-operator-6cf4cccc57-bsmjj" Apr 21 09:58:16.904854 kubelet[2531]: I0421 09:58:16.904799 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6xm\" (UniqueName: \"kubernetes.io/projected/5aa600b3-b023-4f80-8dcd-08b52cb64c08-kube-api-access-vp6xm\") pod \"tigera-operator-6cf4cccc57-bsmjj\" (UID: \"5aa600b3-b023-4f80-8dcd-08b52cb64c08\") " pod="tigera-operator/tigera-operator-6cf4cccc57-bsmjj" Apr 21 09:58:16.904903 systemd[1]: run-containerd-runc-k8s.io-481f8f107f91bc5005954d421c77cb124ce3b15075696e01b0c6178fb3fdb768-runc.KIbW8Z.mount: Deactivated successfully. Apr 21 09:58:16.915694 systemd[1]: Started cri-containerd-481f8f107f91bc5005954d421c77cb124ce3b15075696e01b0c6178fb3fdb768.scope - libcontainer container 481f8f107f91bc5005954d421c77cb124ce3b15075696e01b0c6178fb3fdb768. 
Apr 21 09:58:16.944133 containerd[1491]: time="2026-04-21T09:58:16.943979840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zlz6l,Uid:8c1a058a-02d3-4fb1-bc84-de9c0de7f886,Namespace:kube-system,Attempt:0,} returns sandbox id \"481f8f107f91bc5005954d421c77cb124ce3b15075696e01b0c6178fb3fdb768\"" Apr 21 09:58:16.951562 containerd[1491]: time="2026-04-21T09:58:16.951520320Z" level=info msg="CreateContainer within sandbox \"481f8f107f91bc5005954d421c77cb124ce3b15075696e01b0c6178fb3fdb768\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 09:58:16.969217 containerd[1491]: time="2026-04-21T09:58:16.969138520Z" level=info msg="CreateContainer within sandbox \"481f8f107f91bc5005954d421c77cb124ce3b15075696e01b0c6178fb3fdb768\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d218ba64d4ce200c2d8a79bc75277ef1eabe173b6520d69871dfb08fe55b041\"" Apr 21 09:58:16.970407 containerd[1491]: time="2026-04-21T09:58:16.970262520Z" level=info msg="StartContainer for \"0d218ba64d4ce200c2d8a79bc75277ef1eabe173b6520d69871dfb08fe55b041\"" Apr 21 09:58:17.000318 systemd[1]: Started cri-containerd-0d218ba64d4ce200c2d8a79bc75277ef1eabe173b6520d69871dfb08fe55b041.scope - libcontainer container 0d218ba64d4ce200c2d8a79bc75277ef1eabe173b6520d69871dfb08fe55b041. Apr 21 09:58:17.054741 containerd[1491]: time="2026-04-21T09:58:17.054027160Z" level=info msg="StartContainer for \"0d218ba64d4ce200c2d8a79bc75277ef1eabe173b6520d69871dfb08fe55b041\" returns successfully" Apr 21 09:58:17.135546 containerd[1491]: time="2026-04-21T09:58:17.134799200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-bsmjj,Uid:5aa600b3-b023-4f80-8dcd-08b52cb64c08,Namespace:tigera-operator,Attempt:0,}" Apr 21 09:58:17.163704 containerd[1491]: time="2026-04-21T09:58:17.163196240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:17.163704 containerd[1491]: time="2026-04-21T09:58:17.163260800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:17.163704 containerd[1491]: time="2026-04-21T09:58:17.163276720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:17.163704 containerd[1491]: time="2026-04-21T09:58:17.163371520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:17.183605 systemd[1]: Started cri-containerd-8afc20bd66111b330e82aca9e8796ef05094a6aaa3a6c3da659f331e2b85dd4c.scope - libcontainer container 8afc20bd66111b330e82aca9e8796ef05094a6aaa3a6c3da659f331e2b85dd4c. Apr 21 09:58:17.223042 containerd[1491]: time="2026-04-21T09:58:17.222894600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-bsmjj,Uid:5aa600b3-b023-4f80-8dcd-08b52cb64c08,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8afc20bd66111b330e82aca9e8796ef05094a6aaa3a6c3da659f331e2b85dd4c\"" Apr 21 09:58:17.226970 containerd[1491]: time="2026-04-21T09:58:17.226910440Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 09:58:19.020540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798242750.mount: Deactivated successfully. 
Apr 21 09:58:19.081109 kubelet[2531]: I0421 09:58:19.080478 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-zlz6l" podStartSLOduration=3.08046232 podStartE2EDuration="3.08046232s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:58:17.96584504 +0000 UTC m=+7.238936881" watchObservedRunningTime="2026-04-21 09:58:19.08046232 +0000 UTC m=+8.353554161" Apr 21 09:58:19.436527 containerd[1491]: time="2026-04-21T09:58:19.436480960Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:19.438500 containerd[1491]: time="2026-04-21T09:58:19.438445200Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 21 09:58:19.439902 containerd[1491]: time="2026-04-21T09:58:19.439865720Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:19.448078 containerd[1491]: time="2026-04-21T09:58:19.448027360Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:19.451064 containerd[1491]: time="2026-04-21T09:58:19.450906840Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.2239232s" Apr 21 09:58:19.451064 containerd[1491]: time="2026-04-21T09:58:19.450955360Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 21 09:58:19.456315 containerd[1491]: time="2026-04-21T09:58:19.456272960Z" level=info msg="CreateContainer within sandbox \"8afc20bd66111b330e82aca9e8796ef05094a6aaa3a6c3da659f331e2b85dd4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 09:58:19.477778 containerd[1491]: time="2026-04-21T09:58:19.477564680Z" level=info msg="CreateContainer within sandbox \"8afc20bd66111b330e82aca9e8796ef05094a6aaa3a6c3da659f331e2b85dd4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"047a8a4c361d8405c549cd91f1e0f8cf47799610126b6e10a62a300fa2c0bf40\"" Apr 21 09:58:19.480468 containerd[1491]: time="2026-04-21T09:58:19.478813480Z" level=info msg="StartContainer for \"047a8a4c361d8405c549cd91f1e0f8cf47799610126b6e10a62a300fa2c0bf40\"" Apr 21 09:58:19.509673 systemd[1]: Started cri-containerd-047a8a4c361d8405c549cd91f1e0f8cf47799610126b6e10a62a300fa2c0bf40.scope - libcontainer container 047a8a4c361d8405c549cd91f1e0f8cf47799610126b6e10a62a300fa2c0bf40. 
Apr 21 09:58:19.538569 containerd[1491]: time="2026-04-21T09:58:19.538517840Z" level=info msg="StartContainer for \"047a8a4c361d8405c549cd91f1e0f8cf47799610126b6e10a62a300fa2c0bf40\" returns successfully" Apr 21 09:58:19.969837 kubelet[2531]: I0421 09:58:19.969205 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-bsmjj" podStartSLOduration=1.74191488 podStartE2EDuration="3.96918488s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="2026-04-21 09:58:17.22475472 +0000 UTC m=+6.497846561" lastFinishedPulling="2026-04-21 09:58:19.45202476 +0000 UTC m=+8.725116561" observedRunningTime="2026-04-21 09:58:19.96910448 +0000 UTC m=+9.242196321" watchObservedRunningTime="2026-04-21 09:58:19.96918488 +0000 UTC m=+9.242276761" Apr 21 09:58:21.566112 systemd-resolved[1344]: Clock change detected. Flushing caches. Apr 21 09:58:21.567125 systemd-timesyncd[1366]: Contacted time server 46.224.156.215:123 (2.flatcar.pool.ntp.org). Apr 21 09:58:21.568199 systemd-timesyncd[1366]: Initial clock synchronization to Tue 2026-04-21 09:58:21.566053 UTC. Apr 21 09:58:24.315114 sudo[1677]: pam_unix(sudo:session): session closed for user root Apr 21 09:58:24.335156 sshd[1674]: pam_unix(sshd:session): session closed for user core Apr 21 09:58:24.339066 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 09:58:24.339355 systemd[1]: session-7.scope: Consumed 5.639s CPU time, 153.4M memory peak, 0B memory swap peak. Apr 21 09:58:24.340112 systemd[1]: sshd@6-178.104.214.66:22-50.85.169.122:46126.service: Deactivated successfully. Apr 21 09:58:24.344366 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Apr 21 09:58:24.349698 systemd-logind[1464]: Removed session 7. Apr 21 09:58:32.005553 systemd[1]: Created slice kubepods-besteffort-pod2cfc35f7_37b8_483b_811f_cc8e8ef7736b.slice - libcontainer container kubepods-besteffort-pod2cfc35f7_37b8_483b_811f_cc8e8ef7736b.slice. 
Apr 21 09:58:32.016291 kubelet[2531]: I0421 09:58:32.016242 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbcv\" (UniqueName: \"kubernetes.io/projected/2cfc35f7-37b8-483b-811f-cc8e8ef7736b-kube-api-access-zlbcv\") pod \"calico-typha-85c99575db-7vg5z\" (UID: \"2cfc35f7-37b8-483b-811f-cc8e8ef7736b\") " pod="calico-system/calico-typha-85c99575db-7vg5z" Apr 21 09:58:32.016291 kubelet[2531]: I0421 09:58:32.016285 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2cfc35f7-37b8-483b-811f-cc8e8ef7736b-typha-certs\") pod \"calico-typha-85c99575db-7vg5z\" (UID: \"2cfc35f7-37b8-483b-811f-cc8e8ef7736b\") " pod="calico-system/calico-typha-85c99575db-7vg5z" Apr 21 09:58:32.016291 kubelet[2531]: I0421 09:58:32.016304 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cfc35f7-37b8-483b-811f-cc8e8ef7736b-tigera-ca-bundle\") pod \"calico-typha-85c99575db-7vg5z\" (UID: \"2cfc35f7-37b8-483b-811f-cc8e8ef7736b\") " pod="calico-system/calico-typha-85c99575db-7vg5z" Apr 21 09:58:32.092028 systemd[1]: Created slice kubepods-besteffort-pod85d2c489_06ef_471b_a8aa_5026bfa91a41.slice - libcontainer container kubepods-besteffort-pod85d2c489_06ef_471b_a8aa_5026bfa91a41.slice. 
Apr 21 09:58:32.116822 kubelet[2531]: I0421 09:58:32.116770 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-cni-bin-dir\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.116822 kubelet[2531]: I0421 09:58:32.116821 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85d2c489-06ef-471b-a8aa-5026bfa91a41-node-certs\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117020 kubelet[2531]: I0421 09:58:32.116841 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-cni-log-dir\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117020 kubelet[2531]: I0421 09:58:32.116860 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85d2c489-06ef-471b-a8aa-5026bfa91a41-tigera-ca-bundle\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117020 kubelet[2531]: I0421 09:58:32.116901 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-xtables-lock\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117020 kubelet[2531]: I0421 09:58:32.116922 2531 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-bpffs\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117020 kubelet[2531]: I0421 09:58:32.116942 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-nodeproc\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117143 kubelet[2531]: I0421 09:58:32.116959 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-policysync\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117143 kubelet[2531]: I0421 09:58:32.116979 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-flexvol-driver-host\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117143 kubelet[2531]: I0421 09:58:32.117018 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-var-lib-calico\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117143 kubelet[2531]: I0421 09:58:32.117047 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-sys-fs\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117143 kubelet[2531]: I0421 09:58:32.117061 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-var-run-calico\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117248 kubelet[2531]: I0421 09:58:32.117075 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6p7f\" (UniqueName: \"kubernetes.io/projected/85d2c489-06ef-471b-a8aa-5026bfa91a41-kube-api-access-q6p7f\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117248 kubelet[2531]: I0421 09:58:32.117088 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-cni-net-dir\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.117248 kubelet[2531]: I0421 09:58:32.117101 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85d2c489-06ef-471b-a8aa-5026bfa91a41-lib-modules\") pod \"calico-node-zg9t2\" (UID: \"85d2c489-06ef-471b-a8aa-5026bfa91a41\") " pod="calico-system/calico-node-zg9t2" Apr 21 09:58:32.201329 kubelet[2531]: E0421 09:58:32.201145 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:32.220215 kubelet[2531]: I0421 09:58:32.217962 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88a4ed3c-b6a4-4138-826b-621c4a7e3007-kubelet-dir\") pod \"csi-node-driver-h5t5f\" (UID: \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\") " pod="calico-system/csi-node-driver-h5t5f" Apr 21 09:58:32.221619 kubelet[2531]: I0421 09:58:32.220430 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88a4ed3c-b6a4-4138-826b-621c4a7e3007-socket-dir\") pod \"csi-node-driver-h5t5f\" (UID: \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\") " pod="calico-system/csi-node-driver-h5t5f" Apr 21 09:58:32.221619 kubelet[2531]: I0421 09:58:32.220548 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88a4ed3c-b6a4-4138-826b-621c4a7e3007-registration-dir\") pod \"csi-node-driver-h5t5f\" (UID: \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\") " pod="calico-system/csi-node-driver-h5t5f" Apr 21 09:58:32.221619 kubelet[2531]: I0421 09:58:32.220603 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/88a4ed3c-b6a4-4138-826b-621c4a7e3007-varrun\") pod \"csi-node-driver-h5t5f\" (UID: \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\") " pod="calico-system/csi-node-driver-h5t5f" Apr 21 09:58:32.221619 kubelet[2531]: I0421 09:58:32.220621 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8jb\" (UniqueName: \"kubernetes.io/projected/88a4ed3c-b6a4-4138-826b-621c4a7e3007-kube-api-access-bm8jb\") pod 
\"csi-node-driver-h5t5f\" (UID: \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\") " pod="calico-system/csi-node-driver-h5t5f" Apr 21 09:58:32.263210 kubelet[2531]: E0421 09:58:32.263114 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.263210 kubelet[2531]: W0421 09:58:32.263140 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.263210 kubelet[2531]: E0421 09:58:32.263164 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.281556 update_engine[1465]: I20260421 09:58:32.281461 1465 update_attempter.cc:509] Updating boot flags... Apr 21 09:58:32.316650 containerd[1491]: time="2026-04-21T09:58:32.316234799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85c99575db-7vg5z,Uid:2cfc35f7-37b8-483b-811f-cc8e8ef7736b,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:32.324173 kubelet[2531]: E0421 09:58:32.324143 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.324525 kubelet[2531]: W0421 09:58:32.324342 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.324525 kubelet[2531]: E0421 09:58:32.324371 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.325188 kubelet[2531]: E0421 09:58:32.324994 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.325188 kubelet[2531]: W0421 09:58:32.325012 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.325188 kubelet[2531]: E0421 09:58:32.325027 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.325914 kubelet[2531]: E0421 09:58:32.325786 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.325914 kubelet[2531]: W0421 09:58:32.325807 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.325914 kubelet[2531]: E0421 09:58:32.325821 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.326517 kubelet[2531]: E0421 09:58:32.326499 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.327246 kubelet[2531]: W0421 09:58:32.327096 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.327246 kubelet[2531]: E0421 09:58:32.327120 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.327561 kubelet[2531]: E0421 09:58:32.327497 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.327975 kubelet[2531]: W0421 09:58:32.327652 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.327975 kubelet[2531]: E0421 09:58:32.327858 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.330057 kubelet[2531]: E0421 09:58:32.329279 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.330549 kubelet[2531]: W0421 09:58:32.330142 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.330549 kubelet[2531]: E0421 09:58:32.330166 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.331206 kubelet[2531]: E0421 09:58:32.331035 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.331206 kubelet[2531]: W0421 09:58:32.331050 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.331206 kubelet[2531]: E0421 09:58:32.331070 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.334814 kubelet[2531]: E0421 09:58:32.334601 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.334814 kubelet[2531]: W0421 09:58:32.334620 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.334814 kubelet[2531]: E0421 09:58:32.334636 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.336441 kubelet[2531]: E0421 09:58:32.336233 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.336441 kubelet[2531]: W0421 09:58:32.336255 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.336441 kubelet[2531]: E0421 09:58:32.336271 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.336739 kubelet[2531]: E0421 09:58:32.336725 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.336871 kubelet[2531]: W0421 09:58:32.336799 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.336871 kubelet[2531]: E0421 09:58:32.336817 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.344061 kubelet[2531]: E0421 09:58:32.341517 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.344061 kubelet[2531]: W0421 09:58:32.341541 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.344061 kubelet[2531]: E0421 09:58:32.341563 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.344061 kubelet[2531]: E0421 09:58:32.343729 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.344061 kubelet[2531]: W0421 09:58:32.343745 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.344061 kubelet[2531]: E0421 09:58:32.343762 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.344281 kubelet[2531]: E0421 09:58:32.344093 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.344281 kubelet[2531]: W0421 09:58:32.344104 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.344281 kubelet[2531]: E0421 09:58:32.344116 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.344281 kubelet[2531]: E0421 09:58:32.344259 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.344281 kubelet[2531]: W0421 09:58:32.344267 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.344281 kubelet[2531]: E0421 09:58:32.344275 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.344421 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.346240 kubelet[2531]: W0421 09:58:32.344430 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.344438 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.344608 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.346240 kubelet[2531]: W0421 09:58:32.344617 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.344625 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.344833 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.346240 kubelet[2531]: W0421 09:58:32.344842 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.344851 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.346240 kubelet[2531]: E0421 09:58:32.345003 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.346484 kubelet[2531]: W0421 09:58:32.345012 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.346484 kubelet[2531]: E0421 09:58:32.345020 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.346484 kubelet[2531]: E0421 09:58:32.345144 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.346484 kubelet[2531]: W0421 09:58:32.345151 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.346484 kubelet[2531]: E0421 09:58:32.345159 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347133 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.352461 kubelet[2531]: W0421 09:58:32.347156 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347169 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347604 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.352461 kubelet[2531]: W0421 09:58:32.347615 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347625 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347766 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.352461 kubelet[2531]: W0421 09:58:32.347774 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347781 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.352461 kubelet[2531]: E0421 09:58:32.347967 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.352786 kubelet[2531]: W0421 09:58:32.347977 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.352786 kubelet[2531]: E0421 09:58:32.347986 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.352786 kubelet[2531]: E0421 09:58:32.349263 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.352786 kubelet[2531]: W0421 09:58:32.349276 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.352786 kubelet[2531]: E0421 09:58:32.349288 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.352786 kubelet[2531]: E0421 09:58:32.349480 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.352786 kubelet[2531]: W0421 09:58:32.349489 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.352786 kubelet[2531]: E0421 09:58:32.349498 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:32.399062 kubelet[2531]: E0421 09:58:32.399023 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:32.399062 kubelet[2531]: W0421 09:58:32.399054 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:32.399508 kubelet[2531]: E0421 09:58:32.399082 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:32.404435 containerd[1491]: time="2026-04-21T09:58:32.404395479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zg9t2,Uid:85d2c489-06ef-471b-a8aa-5026bfa91a41,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:32.414496 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2975) Apr 21 09:58:32.446745 containerd[1491]: time="2026-04-21T09:58:32.436034959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:32.446745 containerd[1491]: time="2026-04-21T09:58:32.436096679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:32.446745 containerd[1491]: time="2026-04-21T09:58:32.436108399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:32.446745 containerd[1491]: time="2026-04-21T09:58:32.436191919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:32.528119 systemd[1]: Started cri-containerd-fd2be6828f6ce1142ad1e4d4aaeb37ae70e45d675fdb74db5368bef798048508.scope - libcontainer container fd2be6828f6ce1142ad1e4d4aaeb37ae70e45d675fdb74db5368bef798048508. Apr 21 09:58:32.547716 containerd[1491]: time="2026-04-21T09:58:32.543511599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:32.547716 containerd[1491]: time="2026-04-21T09:58:32.544290399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:32.547716 containerd[1491]: time="2026-04-21T09:58:32.544304999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:32.547716 containerd[1491]: time="2026-04-21T09:58:32.544451879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:32.589551 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2975) Apr 21 09:58:32.627593 systemd[1]: Started cri-containerd-f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a.scope - libcontainer container f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a. 
Apr 21 09:58:32.675853 containerd[1491]: time="2026-04-21T09:58:32.675795719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85c99575db-7vg5z,Uid:2cfc35f7-37b8-483b-811f-cc8e8ef7736b,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd2be6828f6ce1142ad1e4d4aaeb37ae70e45d675fdb74db5368bef798048508\"" Apr 21 09:58:32.678178 containerd[1491]: time="2026-04-21T09:58:32.678075039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 09:58:32.680721 containerd[1491]: time="2026-04-21T09:58:32.680606439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zg9t2,Uid:85d2c489-06ef-471b-a8aa-5026bfa91a41,Namespace:calico-system,Attempt:0,} returns sandbox id \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\"" Apr 21 09:58:33.302613 kubelet[2531]: E0421 09:58:33.302243 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:34.461787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349215011.mount: Deactivated successfully. 
Apr 21 09:58:35.306132 kubelet[2531]: E0421 09:58:35.306091 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:35.486680 containerd[1491]: time="2026-04-21T09:58:35.486607039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:35.488090 containerd[1491]: time="2026-04-21T09:58:35.488043119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 21 09:58:35.489023 containerd[1491]: time="2026-04-21T09:58:35.488699159Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:35.492759 containerd[1491]: time="2026-04-21T09:58:35.492702079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:35.493892 containerd[1491]: time="2026-04-21T09:58:35.493834679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.81570844s" Apr 21 09:58:35.493960 containerd[1491]: time="2026-04-21T09:58:35.493895359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 21 09:58:35.496649 containerd[1491]: time="2026-04-21T09:58:35.496474839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 09:58:35.514257 containerd[1491]: time="2026-04-21T09:58:35.514216879Z" level=info msg="CreateContainer within sandbox \"fd2be6828f6ce1142ad1e4d4aaeb37ae70e45d675fdb74db5368bef798048508\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 09:58:35.535812 containerd[1491]: time="2026-04-21T09:58:35.535481359Z" level=info msg="CreateContainer within sandbox \"fd2be6828f6ce1142ad1e4d4aaeb37ae70e45d675fdb74db5368bef798048508\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6049b9bda2d7cc0a914f032218d4dd9e6f43eb7e1d69378ff5695cef0e40911c\"" Apr 21 09:58:35.543823 containerd[1491]: time="2026-04-21T09:58:35.543776839Z" level=info msg="StartContainer for \"6049b9bda2d7cc0a914f032218d4dd9e6f43eb7e1d69378ff5695cef0e40911c\"" Apr 21 09:58:35.571187 systemd[1]: Started cri-containerd-6049b9bda2d7cc0a914f032218d4dd9e6f43eb7e1d69378ff5695cef0e40911c.scope - libcontainer container 6049b9bda2d7cc0a914f032218d4dd9e6f43eb7e1d69378ff5695cef0e40911c. 
Apr 21 09:58:35.613332 containerd[1491]: time="2026-04-21T09:58:35.613236999Z" level=info msg="StartContainer for \"6049b9bda2d7cc0a914f032218d4dd9e6f43eb7e1d69378ff5695cef0e40911c\" returns successfully" Apr 21 09:58:36.447140 kubelet[2531]: I0421 09:58:36.446423 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-85c99575db-7vg5z" podStartSLOduration=2.628593919 podStartE2EDuration="5.446410879s" podCreationTimestamp="2026-04-21 09:58:31 +0000 UTC" firstStartedPulling="2026-04-21 09:58:32.677692359 +0000 UTC m=+21.527823681" lastFinishedPulling="2026-04-21 09:58:35.495509319 +0000 UTC m=+24.345640641" observedRunningTime="2026-04-21 09:58:36.445743559 +0000 UTC m=+25.295874881" watchObservedRunningTime="2026-04-21 09:58:36.446410879 +0000 UTC m=+25.296542201" Apr 21 09:58:36.528881 kubelet[2531]: E0421 09:58:36.528662 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.528881 kubelet[2531]: W0421 09:58:36.528702 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.528881 kubelet[2531]: E0421 09:58:36.528775 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.529758 kubelet[2531]: E0421 09:58:36.529563 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.529758 kubelet[2531]: W0421 09:58:36.529587 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.529758 kubelet[2531]: E0421 09:58:36.529609 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.530494 kubelet[2531]: E0421 09:58:36.530181 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.530494 kubelet[2531]: W0421 09:58:36.530201 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.530494 kubelet[2531]: E0421 09:58:36.530221 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.531290 kubelet[2531]: E0421 09:58:36.531017 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.531290 kubelet[2531]: W0421 09:58:36.531042 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.531290 kubelet[2531]: E0421 09:58:36.531064 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.531979 kubelet[2531]: E0421 09:58:36.531831 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.531979 kubelet[2531]: W0421 09:58:36.531854 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.531979 kubelet[2531]: E0421 09:58:36.531901 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.532663 kubelet[2531]: E0421 09:58:36.532551 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.532663 kubelet[2531]: W0421 09:58:36.532565 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.532663 kubelet[2531]: E0421 09:58:36.532576 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.533007 kubelet[2531]: E0421 09:58:36.532747 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.533007 kubelet[2531]: W0421 09:58:36.532757 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.533007 kubelet[2531]: E0421 09:58:36.532769 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.533255 kubelet[2531]: E0421 09:58:36.533150 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.533255 kubelet[2531]: W0421 09:58:36.533161 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.533255 kubelet[2531]: E0421 09:58:36.533171 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.533666 kubelet[2531]: E0421 09:58:36.533568 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.533666 kubelet[2531]: W0421 09:58:36.533580 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.533666 kubelet[2531]: E0421 09:58:36.533590 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.533908 kubelet[2531]: E0421 09:58:36.533835 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.533908 kubelet[2531]: W0421 09:58:36.533846 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.533908 kubelet[2531]: E0421 09:58:36.533855 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.534200 kubelet[2531]: E0421 09:58:36.534144 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.534200 kubelet[2531]: W0421 09:58:36.534155 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.534200 kubelet[2531]: E0421 09:58:36.534164 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.534604 kubelet[2531]: E0421 09:58:36.534521 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.534604 kubelet[2531]: W0421 09:58:36.534533 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.534604 kubelet[2531]: E0421 09:58:36.534543 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.534977 kubelet[2531]: E0421 09:58:36.534881 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.534977 kubelet[2531]: W0421 09:58:36.534893 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.534977 kubelet[2531]: E0421 09:58:36.534903 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.535186 kubelet[2531]: E0421 09:58:36.535132 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.535186 kubelet[2531]: W0421 09:58:36.535142 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.535186 kubelet[2531]: E0421 09:58:36.535151 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.535513 kubelet[2531]: E0421 09:58:36.535437 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.535513 kubelet[2531]: W0421 09:58:36.535449 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.535513 kubelet[2531]: E0421 09:58:36.535458 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.584178 kubelet[2531]: E0421 09:58:36.584137 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.584677 kubelet[2531]: W0421 09:58:36.584440 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.584677 kubelet[2531]: E0421 09:58:36.584552 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.585584 kubelet[2531]: E0421 09:58:36.585397 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.585584 kubelet[2531]: W0421 09:58:36.585415 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.585584 kubelet[2531]: E0421 09:58:36.585431 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.586013 kubelet[2531]: E0421 09:58:36.585949 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.586013 kubelet[2531]: W0421 09:58:36.585962 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.586013 kubelet[2531]: E0421 09:58:36.585989 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.587356 kubelet[2531]: E0421 09:58:36.586909 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.587356 kubelet[2531]: W0421 09:58:36.586929 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.587356 kubelet[2531]: E0421 09:58:36.586943 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.588098 kubelet[2531]: E0421 09:58:36.587987 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.588098 kubelet[2531]: W0421 09:58:36.588018 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.588098 kubelet[2531]: E0421 09:58:36.588035 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.588544 kubelet[2531]: E0421 09:58:36.588445 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.588544 kubelet[2531]: W0421 09:58:36.588458 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.588544 kubelet[2531]: E0421 09:58:36.588468 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.588963 kubelet[2531]: E0421 09:58:36.588842 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.588963 kubelet[2531]: W0421 09:58:36.588854 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.588963 kubelet[2531]: E0421 09:58:36.588881 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.589177 kubelet[2531]: E0421 09:58:36.589145 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.589177 kubelet[2531]: W0421 09:58:36.589157 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.589177 kubelet[2531]: E0421 09:58:36.589166 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.589590 kubelet[2531]: E0421 09:58:36.589523 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.589590 kubelet[2531]: W0421 09:58:36.589535 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.589590 kubelet[2531]: E0421 09:58:36.589545 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.590144 kubelet[2531]: E0421 09:58:36.590129 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.590315 kubelet[2531]: W0421 09:58:36.590210 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.590315 kubelet[2531]: E0421 09:58:36.590228 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.590474 kubelet[2531]: E0421 09:58:36.590461 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.590702 kubelet[2531]: W0421 09:58:36.590539 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.590702 kubelet[2531]: E0421 09:58:36.590555 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.590873 kubelet[2531]: E0421 09:58:36.590849 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.590928 kubelet[2531]: W0421 09:58:36.590918 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.590975 kubelet[2531]: E0421 09:58:36.590966 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.591432 kubelet[2531]: E0421 09:58:36.591416 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.591606 kubelet[2531]: W0421 09:58:36.591510 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.591606 kubelet[2531]: E0421 09:58:36.591527 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.591912 kubelet[2531]: E0421 09:58:36.591898 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.592059 kubelet[2531]: W0421 09:58:36.591966 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.592059 kubelet[2531]: E0421 09:58:36.591981 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.592323 kubelet[2531]: E0421 09:58:36.592309 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.592632 kubelet[2531]: W0421 09:58:36.592618 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.592711 kubelet[2531]: E0421 09:58:36.592700 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.593044 kubelet[2531]: E0421 09:58:36.593029 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.593125 kubelet[2531]: W0421 09:58:36.593114 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.593176 kubelet[2531]: E0421 09:58:36.593166 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:36.593595 kubelet[2531]: E0421 09:58:36.593582 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.593703 kubelet[2531]: W0421 09:58:36.593672 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.593703 kubelet[2531]: E0421 09:58:36.593688 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 09:58:36.594015 kubelet[2531]: E0421 09:58:36.593975 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 09:58:36.594015 kubelet[2531]: W0421 09:58:36.593987 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 09:58:36.594015 kubelet[2531]: E0421 09:58:36.593996 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 09:58:37.262636 containerd[1491]: time="2026-04-21T09:58:37.261507599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:37.262636 containerd[1491]: time="2026-04-21T09:58:37.262567399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 21 09:58:37.263467 containerd[1491]: time="2026-04-21T09:58:37.263408559Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:37.267092 containerd[1491]: time="2026-04-21T09:58:37.267056519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:37.268456 containerd[1491]: time="2026-04-21T09:58:37.268013239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.771502s" Apr 21 09:58:37.268456 containerd[1491]: time="2026-04-21T09:58:37.268358159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 21 09:58:37.276150 containerd[1491]: time="2026-04-21T09:58:37.276105719Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 09:58:37.297974 containerd[1491]: time="2026-04-21T09:58:37.297814439Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d\"" Apr 21 09:58:37.298888 containerd[1491]: time="2026-04-21T09:58:37.298632279Z" level=info msg="StartContainer for \"111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d\"" Apr 21 09:58:37.303913 kubelet[2531]: E0421 09:58:37.303833 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:37.341801 systemd[1]: Started cri-containerd-111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d.scope - libcontainer container 111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d. Apr 21 09:58:37.377246 containerd[1491]: time="2026-04-21T09:58:37.377194679Z" level=info msg="StartContainer for \"111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d\" returns successfully" Apr 21 09:58:37.394180 systemd[1]: cri-containerd-111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d.scope: Deactivated successfully. Apr 21 09:58:37.418529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d-rootfs.mount: Deactivated successfully. 
Apr 21 09:58:37.526593 containerd[1491]: time="2026-04-21T09:58:37.526285919Z" level=info msg="shim disconnected" id=111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d namespace=k8s.io Apr 21 09:58:37.526593 containerd[1491]: time="2026-04-21T09:58:37.526341679Z" level=warning msg="cleaning up after shim disconnected" id=111aa8641b1b1e606835cf654cd747f0d8fe588d56d6dfc47a1700c231776a6d namespace=k8s.io Apr 21 09:58:37.526593 containerd[1491]: time="2026-04-21T09:58:37.526350519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:58:38.445312 containerd[1491]: time="2026-04-21T09:58:38.445269319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 09:58:39.304437 kubelet[2531]: E0421 09:58:39.302607 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:41.303993 kubelet[2531]: E0421 09:58:41.303165 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:43.303416 kubelet[2531]: E0421 09:58:43.302719 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:44.791125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440631445.mount: Deactivated successfully. 
Apr 21 09:58:44.821320 containerd[1491]: time="2026-04-21T09:58:44.821226999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:44.823105 containerd[1491]: time="2026-04-21T09:58:44.823043599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 21 09:58:44.824081 containerd[1491]: time="2026-04-21T09:58:44.823713439Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:44.826559 containerd[1491]: time="2026-04-21T09:58:44.826516319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:44.827581 containerd[1491]: time="2026-04-21T09:58:44.827543839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.38223328s" Apr 21 09:58:44.827581 containerd[1491]: time="2026-04-21T09:58:44.827579719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 21 09:58:44.833948 containerd[1491]: time="2026-04-21T09:58:44.833898319Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 09:58:44.853991 containerd[1491]: time="2026-04-21T09:58:44.853765359Z" level=info 
msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2\"" Apr 21 09:58:44.855451 containerd[1491]: time="2026-04-21T09:58:44.854520439Z" level=info msg="StartContainer for \"d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2\"" Apr 21 09:58:44.886881 systemd[1]: run-containerd-runc-k8s.io-d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2-runc.JcSXzX.mount: Deactivated successfully. Apr 21 09:58:44.898122 systemd[1]: Started cri-containerd-d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2.scope - libcontainer container d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2. Apr 21 09:58:44.932131 containerd[1491]: time="2026-04-21T09:58:44.932018599Z" level=info msg="StartContainer for \"d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2\" returns successfully" Apr 21 09:58:45.034160 systemd[1]: cri-containerd-d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2.scope: Deactivated successfully. 
Apr 21 09:58:45.224665 containerd[1491]: time="2026-04-21T09:58:45.224529879Z" level=info msg="shim disconnected" id=d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2 namespace=k8s.io Apr 21 09:58:45.224665 containerd[1491]: time="2026-04-21T09:58:45.224612559Z" level=warning msg="cleaning up after shim disconnected" id=d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2 namespace=k8s.io Apr 21 09:58:45.224665 containerd[1491]: time="2026-04-21T09:58:45.224626279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:58:45.303080 kubelet[2531]: E0421 09:58:45.302376 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:45.468518 containerd[1491]: time="2026-04-21T09:58:45.468264719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 09:58:45.793314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01a949aea7407b95989e04621e5e438886bb2da138d5d9fdea43ad18e42e1f2-rootfs.mount: Deactivated successfully. 
Apr 21 09:58:47.302852 kubelet[2531]: E0421 09:58:47.302744 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:49.303434 kubelet[2531]: E0421 09:58:49.303116 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:49.886226 containerd[1491]: time="2026-04-21T09:58:49.886169559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:49.887687 containerd[1491]: time="2026-04-21T09:58:49.887433359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 21 09:58:49.888943 containerd[1491]: time="2026-04-21T09:58:49.888706759Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:49.893140 containerd[1491]: time="2026-04-21T09:58:49.891534959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:49.893140 containerd[1491]: time="2026-04-21T09:58:49.892792639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 4.42448436s" Apr 21 09:58:49.893140 containerd[1491]: time="2026-04-21T09:58:49.893026479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 21 09:58:49.899514 containerd[1491]: time="2026-04-21T09:58:49.899460679Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 09:58:49.923679 containerd[1491]: time="2026-04-21T09:58:49.923614959Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393\"" Apr 21 09:58:49.925620 containerd[1491]: time="2026-04-21T09:58:49.924920479Z" level=info msg="StartContainer for \"855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393\"" Apr 21 09:58:49.956592 systemd[1]: Started cri-containerd-855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393.scope - libcontainer container 855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393. 
Apr 21 09:58:49.987600 containerd[1491]: time="2026-04-21T09:58:49.987537519Z" level=info msg="StartContainer for \"855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393\" returns successfully" Apr 21 09:58:50.567723 containerd[1491]: time="2026-04-21T09:58:50.567675319Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 09:58:50.570752 systemd[1]: cri-containerd-855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393.scope: Deactivated successfully. Apr 21 09:58:50.599067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393-rootfs.mount: Deactivated successfully. Apr 21 09:58:50.652484 kubelet[2531]: I0421 09:58:50.652162 2531 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 09:58:50.686427 containerd[1491]: time="2026-04-21T09:58:50.683785559Z" level=info msg="shim disconnected" id=855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393 namespace=k8s.io Apr 21 09:58:50.686427 containerd[1491]: time="2026-04-21T09:58:50.683869519Z" level=warning msg="cleaning up after shim disconnected" id=855a4fa71212f8dadeb7b758980faef1174e6189157ab7e12ce557c83e3ea393 namespace=k8s.io Apr 21 09:58:50.686427 containerd[1491]: time="2026-04-21T09:58:50.683879919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 09:58:50.704074 systemd[1]: Created slice kubepods-burstable-podda192526_415e_407b_a486_c9ee15869745.slice - libcontainer container kubepods-burstable-podda192526_415e_407b_a486_c9ee15869745.slice. Apr 21 09:58:50.743628 systemd[1]: Created slice kubepods-besteffort-pod9d2eb78e_0b26_4ad7_9e36_2ec441961e5e.slice - libcontainer container kubepods-besteffort-pod9d2eb78e_0b26_4ad7_9e36_2ec441961e5e.slice. 
Apr 21 09:58:50.762725 systemd[1]: Created slice kubepods-besteffort-pode864a05e_cd95_4bff_bcff_cd119ea67d7b.slice - libcontainer container kubepods-besteffort-pode864a05e_cd95_4bff_bcff_cd119ea67d7b.slice.
Apr 21 09:58:50.776225 systemd[1]: Created slice kubepods-burstable-podf14c36e9_166a_4256_9de6_cdefe0504d6e.slice - libcontainer container kubepods-burstable-podf14c36e9_166a_4256_9de6_cdefe0504d6e.slice.
Apr 21 09:58:50.783175 kubelet[2531]: I0421 09:58:50.782911 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-backend-key-pair\") pod \"whisker-5c9bb4cfc-2r98m\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " pod="calico-system/whisker-5c9bb4cfc-2r98m"
Apr 21 09:58:50.784316 kubelet[2531]: I0421 09:58:50.783676 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da192526-415e-407b-a486-c9ee15869745-config-volume\") pod \"coredns-7d764666f9-cp2pp\" (UID: \"da192526-415e-407b-a486-c9ee15869745\") " pod="kube-system/coredns-7d764666f9-cp2pp"
Apr 21 09:58:50.784570 kubelet[2531]: I0421 09:58:50.784511 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e864a05e-cd95-4bff-bcff-cd119ea67d7b-tigera-ca-bundle\") pod \"calico-kube-controllers-77f77bf9b9-58b2r\" (UID: \"e864a05e-cd95-4bff-bcff-cd119ea67d7b\") " pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r"
Apr 21 09:58:50.784992 kubelet[2531]: I0421 09:58:50.784549 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-ca-bundle\") pod \"whisker-5c9bb4cfc-2r98m\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " pod="calico-system/whisker-5c9bb4cfc-2r98m"
Apr 21 09:58:50.784992 kubelet[2531]: I0421 09:58:50.784763 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-nginx-config\") pod \"whisker-5c9bb4cfc-2r98m\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " pod="calico-system/whisker-5c9bb4cfc-2r98m"
Apr 21 09:58:50.784992 kubelet[2531]: I0421 09:58:50.784782 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8z6\" (UniqueName: \"kubernetes.io/projected/e864a05e-cd95-4bff-bcff-cd119ea67d7b-kube-api-access-xp8z6\") pod \"calico-kube-controllers-77f77bf9b9-58b2r\" (UID: \"e864a05e-cd95-4bff-bcff-cd119ea67d7b\") " pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r"
Apr 21 09:58:50.786068 kubelet[2531]: I0421 09:58:50.785874 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjv9\" (UniqueName: \"kubernetes.io/projected/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-kube-api-access-jwjv9\") pod \"whisker-5c9bb4cfc-2r98m\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " pod="calico-system/whisker-5c9bb4cfc-2r98m"
Apr 21 09:58:50.787333 kubelet[2531]: I0421 09:58:50.786457 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh25q\" (UniqueName: \"kubernetes.io/projected/da192526-415e-407b-a486-c9ee15869745-kube-api-access-wh25q\") pod \"coredns-7d764666f9-cp2pp\" (UID: \"da192526-415e-407b-a486-c9ee15869745\") " pod="kube-system/coredns-7d764666f9-cp2pp"
Apr 21 09:58:50.787333 kubelet[2531]: I0421 09:58:50.786495 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f14c36e9-166a-4256-9de6-cdefe0504d6e-config-volume\") pod \"coredns-7d764666f9-gbnk5\" (UID: \"f14c36e9-166a-4256-9de6-cdefe0504d6e\") " pod="kube-system/coredns-7d764666f9-gbnk5"
Apr 21 09:58:50.787333 kubelet[2531]: I0421 09:58:50.786514 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrpkz\" (UniqueName: \"kubernetes.io/projected/f14c36e9-166a-4256-9de6-cdefe0504d6e-kube-api-access-qrpkz\") pod \"coredns-7d764666f9-gbnk5\" (UID: \"f14c36e9-166a-4256-9de6-cdefe0504d6e\") " pod="kube-system/coredns-7d764666f9-gbnk5"
Apr 21 09:58:50.787205 systemd[1]: Created slice kubepods-besteffort-podf066e964_91d8_4473_bead_51ea9c76986c.slice - libcontainer container kubepods-besteffort-podf066e964_91d8_4473_bead_51ea9c76986c.slice.
Apr 21 09:58:50.799114 systemd[1]: Created slice kubepods-besteffort-pod6fb1502d_d737_4fec_9a69_7870084d206a.slice - libcontainer container kubepods-besteffort-pod6fb1502d_d737_4fec_9a69_7870084d206a.slice.
Apr 21 09:58:50.806240 systemd[1]: Created slice kubepods-besteffort-pod5cf12a5d_710e_475b_9da9_806fe2f83ca0.slice - libcontainer container kubepods-besteffort-pod5cf12a5d_710e_475b_9da9_806fe2f83ca0.slice.
Apr 21 09:58:50.888556 kubelet[2531]: I0421 09:58:50.887531 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf12a5d-710e-475b-9da9-806fe2f83ca0-config\") pod \"goldmane-9f7667bb8-gc82f\" (UID: \"5cf12a5d-710e-475b-9da9-806fe2f83ca0\") " pod="calico-system/goldmane-9f7667bb8-gc82f"
Apr 21 09:58:50.888556 kubelet[2531]: I0421 09:58:50.887626 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5cf12a5d-710e-475b-9da9-806fe2f83ca0-goldmane-key-pair\") pod \"goldmane-9f7667bb8-gc82f\" (UID: \"5cf12a5d-710e-475b-9da9-806fe2f83ca0\") " pod="calico-system/goldmane-9f7667bb8-gc82f"
Apr 21 09:58:50.888556 kubelet[2531]: I0421 09:58:50.887651 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9hz\" (UniqueName: \"kubernetes.io/projected/5cf12a5d-710e-475b-9da9-806fe2f83ca0-kube-api-access-pz9hz\") pod \"goldmane-9f7667bb8-gc82f\" (UID: \"5cf12a5d-710e-475b-9da9-806fe2f83ca0\") " pod="calico-system/goldmane-9f7667bb8-gc82f"
Apr 21 09:58:50.888556 kubelet[2531]: I0421 09:58:50.887671 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrvc\" (UniqueName: \"kubernetes.io/projected/f066e964-91d8-4473-bead-51ea9c76986c-kube-api-access-glrvc\") pod \"calico-apiserver-5459f6b57d-px5v8\" (UID: \"f066e964-91d8-4473-bead-51ea9c76986c\") " pod="calico-system/calico-apiserver-5459f6b57d-px5v8"
Apr 21 09:58:50.888556 kubelet[2531]: I0421 09:58:50.887719 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cf12a5d-710e-475b-9da9-806fe2f83ca0-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-gc82f\" (UID: \"5cf12a5d-710e-475b-9da9-806fe2f83ca0\") " pod="calico-system/goldmane-9f7667bb8-gc82f"
Apr 21 09:58:50.888875 kubelet[2531]: I0421 09:58:50.887740 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f066e964-91d8-4473-bead-51ea9c76986c-calico-apiserver-certs\") pod \"calico-apiserver-5459f6b57d-px5v8\" (UID: \"f066e964-91d8-4473-bead-51ea9c76986c\") " pod="calico-system/calico-apiserver-5459f6b57d-px5v8"
Apr 21 09:58:50.888875 kubelet[2531]: I0421 09:58:50.887830 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6fb1502d-d737-4fec-9a69-7870084d206a-calico-apiserver-certs\") pod \"calico-apiserver-5459f6b57d-nbrdx\" (UID: \"6fb1502d-d737-4fec-9a69-7870084d206a\") " pod="calico-system/calico-apiserver-5459f6b57d-nbrdx"
Apr 21 09:58:50.888875 kubelet[2531]: I0421 09:58:50.887854 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45q6c\" (UniqueName: \"kubernetes.io/projected/6fb1502d-d737-4fec-9a69-7870084d206a-kube-api-access-45q6c\") pod \"calico-apiserver-5459f6b57d-nbrdx\" (UID: \"6fb1502d-d737-4fec-9a69-7870084d206a\") " pod="calico-system/calico-apiserver-5459f6b57d-nbrdx"
Apr 21 09:58:51.029625 containerd[1491]: time="2026-04-21T09:58:51.029555119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cp2pp,Uid:da192526-415e-407b-a486-c9ee15869745,Namespace:kube-system,Attempt:0,}"
Apr 21 09:58:51.054306 containerd[1491]: time="2026-04-21T09:58:51.054261639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9bb4cfc-2r98m,Uid:9d2eb78e-0b26-4ad7-9e36-2ec441961e5e,Namespace:calico-system,Attempt:0,}"
Apr 21 09:58:51.071573 containerd[1491]: time="2026-04-21T09:58:51.071503439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f77bf9b9-58b2r,Uid:e864a05e-cd95-4bff-bcff-cd119ea67d7b,Namespace:calico-system,Attempt:0,}"
Apr 21 09:58:51.087125 containerd[1491]: time="2026-04-21T09:58:51.087079599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gbnk5,Uid:f14c36e9-166a-4256-9de6-cdefe0504d6e,Namespace:kube-system,Attempt:0,}"
Apr 21 09:58:51.094899 containerd[1491]: time="2026-04-21T09:58:51.094604159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-px5v8,Uid:f066e964-91d8-4473-bead-51ea9c76986c,Namespace:calico-system,Attempt:0,}"
Apr 21 09:58:51.107547 containerd[1491]: time="2026-04-21T09:58:51.107509079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-nbrdx,Uid:6fb1502d-d737-4fec-9a69-7870084d206a,Namespace:calico-system,Attempt:0,}"
Apr 21 09:58:51.112973 containerd[1491]: time="2026-04-21T09:58:51.112921799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gc82f,Uid:5cf12a5d-710e-475b-9da9-806fe2f83ca0,Namespace:calico-system,Attempt:0,}"
Apr 21 09:58:51.178178 containerd[1491]: time="2026-04-21T09:58:51.178057119Z" level=error msg="Failed to destroy network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.179577 containerd[1491]: time="2026-04-21T09:58:51.179528759Z" level=error msg="encountered an error cleaning up failed sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.179701 containerd[1491]: time="2026-04-21T09:58:51.179610919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cp2pp,Uid:da192526-415e-407b-a486-c9ee15869745,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.180442 containerd[1491]: time="2026-04-21T09:58:51.179758879Z" level=error msg="Failed to destroy network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.180617 kubelet[2531]: E0421 09:58:51.179950 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.180617 kubelet[2531]: E0421 09:58:51.180016 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cp2pp"
Apr 21 09:58:51.180617 kubelet[2531]: E0421 09:58:51.180043 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cp2pp"
Apr 21 09:58:51.180721 kubelet[2531]: E0421 09:58:51.180102 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-cp2pp_kube-system(da192526-415e-407b-a486-c9ee15869745)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-cp2pp_kube-system(da192526-415e-407b-a486-c9ee15869745)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cp2pp" podUID="da192526-415e-407b-a486-c9ee15869745"
Apr 21 09:58:51.181817 containerd[1491]: time="2026-04-21T09:58:51.181745439Z" level=error msg="encountered an error cleaning up failed sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.181895 containerd[1491]: time="2026-04-21T09:58:51.181863479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9bb4cfc-2r98m,Uid:9d2eb78e-0b26-4ad7-9e36-2ec441961e5e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.182225 kubelet[2531]: E0421 09:58:51.182156 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.182477 kubelet[2531]: E0421 09:58:51.182376 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c9bb4cfc-2r98m"
Apr 21 09:58:51.182477 kubelet[2531]: E0421 09:58:51.182433 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c9bb4cfc-2r98m"
Apr 21 09:58:51.182648 kubelet[2531]: E0421 09:58:51.182589 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c9bb4cfc-2r98m_calico-system(9d2eb78e-0b26-4ad7-9e36-2ec441961e5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c9bb4cfc-2r98m_calico-system(9d2eb78e-0b26-4ad7-9e36-2ec441961e5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c9bb4cfc-2r98m" podUID="9d2eb78e-0b26-4ad7-9e36-2ec441961e5e"
Apr 21 09:58:51.287579 containerd[1491]: time="2026-04-21T09:58:51.287443839Z" level=error msg="Failed to destroy network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.287858 containerd[1491]: time="2026-04-21T09:58:51.287784479Z" level=error msg="encountered an error cleaning up failed sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.287904 containerd[1491]: time="2026-04-21T09:58:51.287874239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f77bf9b9-58b2r,Uid:e864a05e-cd95-4bff-bcff-cd119ea67d7b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.289263 kubelet[2531]: E0421 09:58:51.288122 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.289263 kubelet[2531]: E0421 09:58:51.288175 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r"
Apr 21 09:58:51.289263 kubelet[2531]: E0421 09:58:51.288196 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r"
Apr 21 09:58:51.289458 kubelet[2531]: E0421 09:58:51.288261 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77f77bf9b9-58b2r_calico-system(e864a05e-cd95-4bff-bcff-cd119ea67d7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77f77bf9b9-58b2r_calico-system(e864a05e-cd95-4bff-bcff-cd119ea67d7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r" podUID="e864a05e-cd95-4bff-bcff-cd119ea67d7b"
Apr 21 09:58:51.312399 systemd[1]: Created slice kubepods-besteffort-pod88a4ed3c_b6a4_4138_826b_621c4a7e3007.slice - libcontainer container kubepods-besteffort-pod88a4ed3c_b6a4_4138_826b_621c4a7e3007.slice.
Apr 21 09:58:51.318577 containerd[1491]: time="2026-04-21T09:58:51.318537119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5t5f,Uid:88a4ed3c-b6a4-4138-826b-621c4a7e3007,Namespace:calico-system,Attempt:0,}"
Apr 21 09:58:51.324626 containerd[1491]: time="2026-04-21T09:58:51.324577599Z" level=error msg="Failed to destroy network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.325299 containerd[1491]: time="2026-04-21T09:58:51.325256999Z" level=error msg="encountered an error cleaning up failed sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.325374 containerd[1491]: time="2026-04-21T09:58:51.325327079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-nbrdx,Uid:6fb1502d-d737-4fec-9a69-7870084d206a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.326133 kubelet[2531]: E0421 09:58:51.325577 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.326133 kubelet[2531]: E0421 09:58:51.325707 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5459f6b57d-nbrdx"
Apr 21 09:58:51.326133 kubelet[2531]: E0421 09:58:51.325724 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5459f6b57d-nbrdx"
Apr 21 09:58:51.327812 kubelet[2531]: E0421 09:58:51.325781 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5459f6b57d-nbrdx_calico-system(6fb1502d-d737-4fec-9a69-7870084d206a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5459f6b57d-nbrdx_calico-system(6fb1502d-d737-4fec-9a69-7870084d206a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5459f6b57d-nbrdx" podUID="6fb1502d-d737-4fec-9a69-7870084d206a"
Apr 21 09:58:51.342393 containerd[1491]: time="2026-04-21T09:58:51.342334079Z" level=error msg="Failed to destroy network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.343023 containerd[1491]: time="2026-04-21T09:58:51.342984719Z" level=error msg="encountered an error cleaning up failed sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.343289 containerd[1491]: time="2026-04-21T09:58:51.343256039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gbnk5,Uid:f14c36e9-166a-4256-9de6-cdefe0504d6e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.343834 kubelet[2531]: E0421 09:58:51.343756 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.344102 kubelet[2531]: E0421 09:58:51.344080 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gbnk5"
Apr 21 09:58:51.344354 kubelet[2531]: E0421 09:58:51.344229 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-gbnk5"
Apr 21 09:58:51.344354 kubelet[2531]: E0421 09:58:51.344317 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-gbnk5_kube-system(f14c36e9-166a-4256-9de6-cdefe0504d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-gbnk5_kube-system(f14c36e9-166a-4256-9de6-cdefe0504d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gbnk5" podUID="f14c36e9-166a-4256-9de6-cdefe0504d6e"
Apr 21 09:58:51.351099 containerd[1491]: time="2026-04-21T09:58:51.350949919Z" level=error msg="Failed to destroy network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.353445 containerd[1491]: time="2026-04-21T09:58:51.352369119Z" level=error msg="encountered an error cleaning up failed sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.354030 containerd[1491]: time="2026-04-21T09:58:51.353996159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-px5v8,Uid:f066e964-91d8-4473-bead-51ea9c76986c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.354931 kubelet[2531]: E0421 09:58:51.354501 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.354931 kubelet[2531]: E0421 09:58:51.354560 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5459f6b57d-px5v8"
Apr 21 09:58:51.354931 kubelet[2531]: E0421 09:58:51.354580 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5459f6b57d-px5v8"
Apr 21 09:58:51.355079 kubelet[2531]: E0421 09:58:51.354629 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5459f6b57d-px5v8_calico-system(f066e964-91d8-4473-bead-51ea9c76986c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5459f6b57d-px5v8_calico-system(f066e964-91d8-4473-bead-51ea9c76986c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5459f6b57d-px5v8" podUID="f066e964-91d8-4473-bead-51ea9c76986c"
Apr 21 09:58:51.370055 containerd[1491]: time="2026-04-21T09:58:51.369820399Z" level=error msg="Failed to destroy network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.371133 containerd[1491]: time="2026-04-21T09:58:51.370975799Z" level=error msg="encountered an error cleaning up failed sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.371133 containerd[1491]: time="2026-04-21T09:58:51.371056359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gc82f,Uid:5cf12a5d-710e-475b-9da9-806fe2f83ca0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.372902 kubelet[2531]: E0421 09:58:51.371307 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.372902 kubelet[2531]: E0421 09:58:51.371377 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-gc82f"
Apr 21 09:58:51.372902 kubelet[2531]: E0421 09:58:51.371474 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-gc82f"
Apr 21 09:58:51.373012 kubelet[2531]: E0421 09:58:51.371548 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-gc82f_calico-system(5cf12a5d-710e-475b-9da9-806fe2f83ca0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-gc82f_calico-system(5cf12a5d-710e-475b-9da9-806fe2f83ca0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-gc82f" podUID="5cf12a5d-710e-475b-9da9-806fe2f83ca0"
Apr 21 09:58:51.406639 containerd[1491]: time="2026-04-21T09:58:51.406584799Z" level=error msg="Failed to destroy network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.407043 containerd[1491]: time="2026-04-21T09:58:51.407012359Z" level=error msg="encountered an error cleaning up failed sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.407098 containerd[1491]: time="2026-04-21T09:58:51.407073639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5t5f,Uid:88a4ed3c-b6a4-4138-826b-621c4a7e3007,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.407457 kubelet[2531]: E0421 09:58:51.407327 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 09:58:51.407457 kubelet[2531]: E0421 09:58:51.407405 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5t5f"
Apr 21 09:58:51.407457 kubelet[2531]: E0421 09:58:51.407422 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5t5f"
Apr 21 
09:58:51.408630 kubelet[2531]: E0421 09:58:51.407654 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h5t5f_calico-system(88a4ed3c-b6a4-4138-826b-621c4a7e3007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h5t5f_calico-system(88a4ed3c-b6a4-4138-826b-621c4a7e3007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:51.488407 kubelet[2531]: I0421 09:58:51.487757 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:58:51.489664 containerd[1491]: time="2026-04-21T09:58:51.489359199Z" level=info msg="StopPodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\"" Apr 21 09:58:51.491087 containerd[1491]: time="2026-04-21T09:58:51.490037199Z" level=info msg="Ensure that sandbox 90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb in task-service has been cleanup successfully" Apr 21 09:58:51.491643 kubelet[2531]: I0421 09:58:51.491622 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:58:51.493366 containerd[1491]: time="2026-04-21T09:58:51.493335639Z" level=info msg="StopPodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\"" Apr 21 09:58:51.493528 containerd[1491]: time="2026-04-21T09:58:51.493510639Z" level=info msg="Ensure that sandbox fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263 in 
task-service has been cleanup successfully" Apr 21 09:58:51.499032 kubelet[2531]: I0421 09:58:51.498570 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:58:51.499642 containerd[1491]: time="2026-04-21T09:58:51.499491839Z" level=info msg="StopPodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\"" Apr 21 09:58:51.500141 containerd[1491]: time="2026-04-21T09:58:51.499909559Z" level=info msg="Ensure that sandbox 684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5 in task-service has been cleanup successfully" Apr 21 09:58:51.520886 kubelet[2531]: I0421 09:58:51.520228 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:58:51.523987 containerd[1491]: time="2026-04-21T09:58:51.523664959Z" level=info msg="StopPodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\"" Apr 21 09:58:51.525436 containerd[1491]: time="2026-04-21T09:58:51.524472159Z" level=info msg="Ensure that sandbox 63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74 in task-service has been cleanup successfully" Apr 21 09:58:51.537682 kubelet[2531]: I0421 09:58:51.536960 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:58:51.545631 containerd[1491]: time="2026-04-21T09:58:51.544544359Z" level=info msg="StopPodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\"" Apr 21 09:58:51.547416 containerd[1491]: time="2026-04-21T09:58:51.546195719Z" level=info msg="Ensure that sandbox a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf in task-service has been cleanup successfully" Apr 21 09:58:51.556456 containerd[1491]: 
time="2026-04-21T09:58:51.554664239Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 09:58:51.564486 kubelet[2531]: I0421 09:58:51.560898 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:58:51.564701 containerd[1491]: time="2026-04-21T09:58:51.561655439Z" level=info msg="StopPodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\"" Apr 21 09:58:51.564701 containerd[1491]: time="2026-04-21T09:58:51.561826879Z" level=info msg="Ensure that sandbox 7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e in task-service has been cleanup successfully" Apr 21 09:58:51.565428 kubelet[2531]: I0421 09:58:51.564877 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:58:51.566420 containerd[1491]: time="2026-04-21T09:58:51.565373119Z" level=info msg="StopPodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\"" Apr 21 09:58:51.566675 containerd[1491]: time="2026-04-21T09:58:51.566274439Z" level=info msg="Ensure that sandbox 78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902 in task-service has been cleanup successfully" Apr 21 09:58:51.584190 kubelet[2531]: I0421 09:58:51.584155 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:58:51.586299 containerd[1491]: time="2026-04-21T09:58:51.586138519Z" level=info msg="StopPodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\"" Apr 21 09:58:51.586455 containerd[1491]: time="2026-04-21T09:58:51.586324959Z" level=info msg="Ensure that sandbox 
887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310 in task-service has been cleanup successfully" Apr 21 09:58:51.616969 containerd[1491]: time="2026-04-21T09:58:51.616907399Z" level=error msg="StopPodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" failed" error="failed to destroy network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.617586 kubelet[2531]: E0421 09:58:51.617552 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:58:51.617784 kubelet[2531]: E0421 09:58:51.617733 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263"} Apr 21 09:58:51.618065 kubelet[2531]: E0421 09:58:51.618043 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fb1502d-d737-4fec-9a69-7870084d206a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.618210 kubelet[2531]: E0421 09:58:51.618180 2531 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fb1502d-d737-4fec-9a69-7870084d206a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5459f6b57d-nbrdx" podUID="6fb1502d-d737-4fec-9a69-7870084d206a" Apr 21 09:58:51.628239 containerd[1491]: time="2026-04-21T09:58:51.628180759Z" level=info msg="CreateContainer within sandbox \"f61d43c22045acc25cd638557e26660480e7d3bf04b506bb0f63b589b299531a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cf3749648a71553839a1eea09dc1508283768de5819b0267e7eb66d2e82a4d9a\"" Apr 21 09:58:51.635707 containerd[1491]: time="2026-04-21T09:58:51.634543439Z" level=info msg="StartContainer for \"cf3749648a71553839a1eea09dc1508283768de5819b0267e7eb66d2e82a4d9a\"" Apr 21 09:58:51.636237 containerd[1491]: time="2026-04-21T09:58:51.635509479Z" level=error msg="StopPodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" failed" error="failed to destroy network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.638451 kubelet[2531]: E0421 09:58:51.638398 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:58:51.638924 kubelet[2531]: E0421 09:58:51.638453 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb"} Apr 21 09:58:51.638924 kubelet[2531]: E0421 09:58:51.638701 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f066e964-91d8-4473-bead-51ea9c76986c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.639046 kubelet[2531]: E0421 09:58:51.638945 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f066e964-91d8-4473-bead-51ea9c76986c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5459f6b57d-px5v8" podUID="f066e964-91d8-4473-bead-51ea9c76986c" Apr 21 09:58:51.660084 containerd[1491]: time="2026-04-21T09:58:51.660033519Z" level=error msg="StopPodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" failed" error="failed to destroy network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.660693 kubelet[2531]: E0421 09:58:51.660628 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:58:51.661049 kubelet[2531]: E0421 09:58:51.660693 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5"} Apr 21 09:58:51.661049 kubelet[2531]: E0421 09:58:51.660727 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f14c36e9-166a-4256-9de6-cdefe0504d6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.661049 kubelet[2531]: E0421 09:58:51.660755 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f14c36e9-166a-4256-9de6-cdefe0504d6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-gbnk5" 
podUID="f14c36e9-166a-4256-9de6-cdefe0504d6e" Apr 21 09:58:51.693392 containerd[1491]: time="2026-04-21T09:58:51.693187399Z" level=error msg="StopPodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" failed" error="failed to destroy network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.693544 kubelet[2531]: E0421 09:58:51.693451 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:58:51.693544 kubelet[2531]: E0421 09:58:51.693497 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310"} Apr 21 09:58:51.693544 kubelet[2531]: E0421 09:58:51.693527 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.693777 kubelet[2531]: E0421 09:58:51.693555 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c9bb4cfc-2r98m" podUID="9d2eb78e-0b26-4ad7-9e36-2ec441961e5e" Apr 21 09:58:51.694197 containerd[1491]: time="2026-04-21T09:58:51.694155199Z" level=error msg="StopPodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" failed" error="failed to destroy network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.694623 kubelet[2531]: E0421 09:58:51.694368 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:58:51.694623 kubelet[2531]: E0421 09:58:51.694514 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902"} Apr 21 09:58:51.694623 kubelet[2531]: E0421 09:58:51.694550 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e864a05e-cd95-4bff-bcff-cd119ea67d7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.694623 kubelet[2531]: E0421 09:58:51.694581 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e864a05e-cd95-4bff-bcff-cd119ea67d7b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r" podUID="e864a05e-cd95-4bff-bcff-cd119ea67d7b" Apr 21 09:58:51.703700 containerd[1491]: time="2026-04-21T09:58:51.703554439Z" level=error msg="StopPodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" failed" error="failed to destroy network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.703862 kubelet[2531]: E0421 09:58:51.703821 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:58:51.704601 
kubelet[2531]: E0421 09:58:51.703869 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74"} Apr 21 09:58:51.704601 kubelet[2531]: E0421 09:58:51.704073 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da192526-415e-407b-a486-c9ee15869745\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.704601 kubelet[2531]: E0421 09:58:51.704112 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da192526-415e-407b-a486-c9ee15869745\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cp2pp" podUID="da192526-415e-407b-a486-c9ee15869745" Apr 21 09:58:51.717519 containerd[1491]: time="2026-04-21T09:58:51.717455959Z" level=error msg="StopPodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" failed" error="failed to destroy network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.717667 containerd[1491]: time="2026-04-21T09:58:51.717636999Z" level=error 
msg="StopPodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" failed" error="failed to destroy network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 09:58:51.717931 kubelet[2531]: E0421 09:58:51.717878 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:58:51.717997 kubelet[2531]: E0421 09:58:51.717948 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e"} Apr 21 09:58:51.717997 kubelet[2531]: E0421 09:58:51.717878 2531 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:58:51.718062 kubelet[2531]: E0421 09:58:51.717997 2531 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf"} Apr 21 09:58:51.718062 kubelet[2531]: E0421 09:58:51.718023 2531 kuberuntime_manager.go:1422] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.718062 kubelet[2531]: E0421 09:58:51.718051 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88a4ed3c-b6a4-4138-826b-621c4a7e3007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h5t5f" podUID="88a4ed3c-b6a4-4138-826b-621c4a7e3007" Apr 21 09:58:51.718179 kubelet[2531]: E0421 09:58:51.717978 2531 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5cf12a5d-710e-475b-9da9-806fe2f83ca0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 09:58:51.718179 kubelet[2531]: E0421 09:58:51.718103 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5cf12a5d-710e-475b-9da9-806fe2f83ca0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-gc82f" podUID="5cf12a5d-710e-475b-9da9-806fe2f83ca0" Apr 21 09:58:51.726619 systemd[1]: Started cri-containerd-cf3749648a71553839a1eea09dc1508283768de5819b0267e7eb66d2e82a4d9a.scope - libcontainer container cf3749648a71553839a1eea09dc1508283768de5819b0267e7eb66d2e82a4d9a. Apr 21 09:58:51.761328 containerd[1491]: time="2026-04-21T09:58:51.761190759Z" level=info msg="StartContainer for \"cf3749648a71553839a1eea09dc1508283768de5819b0267e7eb66d2e82a4d9a\" returns successfully" Apr 21 09:58:52.593680 containerd[1491]: time="2026-04-21T09:58:52.592259359Z" level=info msg="StopPodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\"" Apr 21 09:58:52.635315 kubelet[2531]: I0421 09:58:52.635230 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-zg9t2" podStartSLOduration=1.804736719 podStartE2EDuration="20.635213119s" podCreationTimestamp="2026-04-21 09:58:32 +0000 UTC" firstStartedPulling="2026-04-21 09:58:32.685103439 +0000 UTC m=+21.535234761" lastFinishedPulling="2026-04-21 09:58:51.515579839 +0000 UTC m=+40.365711161" observedRunningTime="2026-04-21 09:58:52.635132719 +0000 UTC m=+41.485264041" watchObservedRunningTime="2026-04-21 09:58:52.635213119 +0000 UTC m=+41.485344521" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.677 [INFO][3767] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.677 [INFO][3767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" iface="eth0" netns="/var/run/netns/cni-ddeb63a3-c141-61b0-0964-a8d808a0a535" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.678 [INFO][3767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" iface="eth0" netns="/var/run/netns/cni-ddeb63a3-c141-61b0-0964-a8d808a0a535" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.678 [INFO][3767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" iface="eth0" netns="/var/run/netns/cni-ddeb63a3-c141-61b0-0964-a8d808a0a535" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.678 [INFO][3767] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.678 [INFO][3767] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.722 [INFO][3775] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.722 [INFO][3775] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.722 [INFO][3775] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.736 [WARNING][3775] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.736 [INFO][3775] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.738 [INFO][3775] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:58:52.744028 containerd[1491]: 2026-04-21 09:58:52.741 [INFO][3767] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:58:52.747054 containerd[1491]: time="2026-04-21T09:58:52.746540439Z" level=info msg="TearDown network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" successfully" Apr 21 09:58:52.747054 containerd[1491]: time="2026-04-21T09:58:52.746595799Z" level=info msg="StopPodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" returns successfully" Apr 21 09:58:52.747353 systemd[1]: run-netns-cni\x2dddeb63a3\x2dc141\x2d61b0\x2d0964\x2da8d808a0a535.mount: Deactivated successfully. 
Apr 21 09:58:52.806448 kubelet[2531]: I0421 09:58:52.805843 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-nginx-config\" (UniqueName: \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-nginx-config\") pod \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " Apr 21 09:58:52.806448 kubelet[2531]: I0421 09:58:52.805933 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-kube-api-access-jwjv9\" (UniqueName: \"kubernetes.io/projected/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-kube-api-access-jwjv9\") pod \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " Apr 21 09:58:52.806448 kubelet[2531]: I0421 09:58:52.805976 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-backend-key-pair\") pod \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " Apr 21 09:58:52.806448 kubelet[2531]: I0421 09:58:52.806009 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-ca-bundle\") pod \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\" (UID: \"9d2eb78e-0b26-4ad7-9e36-2ec441961e5e\") " Apr 21 09:58:52.812781 kubelet[2531]: I0421 09:58:52.812709 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-nginx-config" pod "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e" (UID: "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 09:58:52.813179 kubelet[2531]: I0421 09:58:52.813141 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-ca-bundle" pod "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e" (UID: "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 09:58:52.816615 kubelet[2531]: I0421 09:58:52.816574 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-kube-api-access-jwjv9" pod "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e" (UID: "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e"). InnerVolumeSpecName "kube-api-access-jwjv9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 09:58:52.817240 systemd[1]: var-lib-kubelet-pods-9d2eb78e\x2d0b26\x2d4ad7\x2d9e36\x2d2ec441961e5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwjv9.mount: Deactivated successfully. Apr 21 09:58:52.819728 kubelet[2531]: I0421 09:58:52.819645 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-backend-key-pair" pod "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e" (UID: "9d2eb78e-0b26-4ad7-9e36-2ec441961e5e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 09:58:52.822095 systemd[1]: var-lib-kubelet-pods-9d2eb78e\x2d0b26\x2d4ad7\x2d9e36\x2d2ec441961e5e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 09:58:52.907043 kubelet[2531]: I0421 09:58:52.906487 2531 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-backend-key-pair\") on node \"ci-4081-3-7-a-ee081c135b\" DevicePath \"\"" Apr 21 09:58:52.907043 kubelet[2531]: I0421 09:58:52.906602 2531 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-whisker-ca-bundle\") on node \"ci-4081-3-7-a-ee081c135b\" DevicePath \"\"" Apr 21 09:58:52.907043 kubelet[2531]: I0421 09:58:52.906655 2531 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-nginx-config\") on node \"ci-4081-3-7-a-ee081c135b\" DevicePath \"\"" Apr 21 09:58:52.907043 kubelet[2531]: I0421 09:58:52.906708 2531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwjv9\" (UniqueName: \"kubernetes.io/projected/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e-kube-api-access-jwjv9\") on node \"ci-4081-3-7-a-ee081c135b\" DevicePath \"\"" Apr 21 09:58:53.312266 systemd[1]: Removed slice kubepods-besteffort-pod9d2eb78e_0b26_4ad7_9e36_2ec441961e5e.slice - libcontainer container kubepods-besteffort-pod9d2eb78e_0b26_4ad7_9e36_2ec441961e5e.slice. Apr 21 09:58:53.667413 kernel: calico-node[3798]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 09:58:53.724665 systemd[1]: Created slice kubepods-besteffort-pod817d02c6_1e32_4fa1_ab1d_462bb539e2c5.slice - libcontainer container kubepods-besteffort-pod817d02c6_1e32_4fa1_ab1d_462bb539e2c5.slice. 
Apr 21 09:58:53.826059 kubelet[2531]: I0421 09:58:53.825706 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/817d02c6-1e32-4fa1-ab1d-462bb539e2c5-whisker-backend-key-pair\") pod \"whisker-c64446cc4-8t7mw\" (UID: \"817d02c6-1e32-4fa1-ab1d-462bb539e2c5\") " pod="calico-system/whisker-c64446cc4-8t7mw" Apr 21 09:58:53.826059 kubelet[2531]: I0421 09:58:53.825766 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/817d02c6-1e32-4fa1-ab1d-462bb539e2c5-whisker-ca-bundle\") pod \"whisker-c64446cc4-8t7mw\" (UID: \"817d02c6-1e32-4fa1-ab1d-462bb539e2c5\") " pod="calico-system/whisker-c64446cc4-8t7mw" Apr 21 09:58:53.826059 kubelet[2531]: I0421 09:58:53.825828 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9gmw\" (UniqueName: \"kubernetes.io/projected/817d02c6-1e32-4fa1-ab1d-462bb539e2c5-kube-api-access-n9gmw\") pod \"whisker-c64446cc4-8t7mw\" (UID: \"817d02c6-1e32-4fa1-ab1d-462bb539e2c5\") " pod="calico-system/whisker-c64446cc4-8t7mw" Apr 21 09:58:53.826059 kubelet[2531]: I0421 09:58:53.825851 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/817d02c6-1e32-4fa1-ab1d-462bb539e2c5-nginx-config\") pod \"whisker-c64446cc4-8t7mw\" (UID: \"817d02c6-1e32-4fa1-ab1d-462bb539e2c5\") " pod="calico-system/whisker-c64446cc4-8t7mw" Apr 21 09:58:54.042086 containerd[1491]: time="2026-04-21T09:58:54.041954607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c64446cc4-8t7mw,Uid:817d02c6-1e32-4fa1-ab1d-462bb539e2c5,Namespace:calico-system,Attempt:0,}" Apr 21 09:58:54.235809 systemd-networkd[1388]: vxlan.calico: Link UP Apr 21 09:58:54.235819 systemd-networkd[1388]: vxlan.calico: 
Gained carrier Apr 21 09:58:54.259725 systemd-networkd[1388]: cali15c282b56c4: Link UP Apr 21 09:58:54.261732 systemd-networkd[1388]: cali15c282b56c4: Gained carrier Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.107 [INFO][3941] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0 whisker-c64446cc4- calico-system 817d02c6-1e32-4fa1-ab1d-462bb539e2c5 910 0 2026-04-21 09:58:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c64446cc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b whisker-c64446cc4-8t7mw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali15c282b56c4 [] [] }} ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.108 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.157 [INFO][3954] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" HandleID="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.170 [INFO][3954] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" HandleID="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"whisker-c64446cc4-8t7mw", "timestamp":"2026-04-21 09:58:54.157564406 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400026b080)} Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.170 [INFO][3954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.170 [INFO][3954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.170 [INFO][3954] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.175 [INFO][3954] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.188 [INFO][3954] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.197 [INFO][3954] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.199 [INFO][3954] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.205 [INFO][3954] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.205 [INFO][3954] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.210 [INFO][3954] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.216 [INFO][3954] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.225 [INFO][3954] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.193/26] block=192.168.124.192/26 handle="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.226 [INFO][3954] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.193/26] handle="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.226 [INFO][3954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:58:54.287476 containerd[1491]: 2026-04-21 09:58:54.226 [INFO][3954] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.193/26] IPv6=[] ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" HandleID="k8s-pod-network.59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.288093 containerd[1491]: 2026-04-21 09:58:54.231 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0", GenerateName:"whisker-c64446cc4-", Namespace:"calico-system", SelfLink:"", UID:"817d02c6-1e32-4fa1-ab1d-462bb539e2c5", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c64446cc4", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"whisker-c64446cc4-8t7mw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15c282b56c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:58:54.288093 containerd[1491]: 2026-04-21 09:58:54.239 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.193/32] ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.288093 containerd[1491]: 2026-04-21 09:58:54.239 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15c282b56c4 ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.288093 containerd[1491]: 2026-04-21 09:58:54.262 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.288093 containerd[1491]: 2026-04-21 09:58:54.263 [INFO][3941] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0", GenerateName:"whisker-c64446cc4-", Namespace:"calico-system", SelfLink:"", UID:"817d02c6-1e32-4fa1-ab1d-462bb539e2c5", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c64446cc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b", Pod:"whisker-c64446cc4-8t7mw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali15c282b56c4", MAC:"4e:03:01:c9:0e:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:58:54.288093 containerd[1491]: 2026-04-21 09:58:54.282 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b" Namespace="calico-system" Pod="whisker-c64446cc4-8t7mw" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--c64446cc4--8t7mw-eth0" Apr 21 09:58:54.324016 containerd[1491]: time="2026-04-21T09:58:54.322286903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:58:54.324016 containerd[1491]: time="2026-04-21T09:58:54.322350024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:58:54.324016 containerd[1491]: time="2026-04-21T09:58:54.322360504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:54.324016 containerd[1491]: time="2026-04-21T09:58:54.322634547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:58:54.348583 systemd[1]: Started cri-containerd-59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b.scope - libcontainer container 59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b. 
Apr 21 09:58:54.397064 containerd[1491]: time="2026-04-21T09:58:54.396966301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c64446cc4-8t7mw,Uid:817d02c6-1e32-4fa1-ab1d-462bb539e2c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b\"" Apr 21 09:58:54.402321 containerd[1491]: time="2026-04-21T09:58:54.401351433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 09:58:55.305931 kubelet[2531]: I0421 09:58:55.305870 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9d2eb78e-0b26-4ad7-9e36-2ec441961e5e" path="/var/lib/kubelet/pods/9d2eb78e-0b26-4ad7-9e36-2ec441961e5e/volumes" Apr 21 09:58:55.401955 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Apr 21 09:58:55.785356 systemd-networkd[1388]: cali15c282b56c4: Gained IPv6LL Apr 21 09:58:56.291452 containerd[1491]: time="2026-04-21T09:58:56.291391464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:56.293350 containerd[1491]: time="2026-04-21T09:58:56.293049762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 21 09:58:56.293350 containerd[1491]: time="2026-04-21T09:58:56.293276484Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:56.296493 containerd[1491]: time="2026-04-21T09:58:56.296358516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:56.298371 containerd[1491]: time="2026-04-21T09:58:56.298212655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.895947132s" Apr 21 09:58:56.298371 containerd[1491]: time="2026-04-21T09:58:56.298260855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 21 09:58:56.306057 containerd[1491]: time="2026-04-21T09:58:56.306004895Z" level=info msg="CreateContainer within sandbox \"59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 09:58:56.321987 containerd[1491]: time="2026-04-21T09:58:56.321914580Z" level=info msg="CreateContainer within sandbox \"59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"81fc63599fac3b3a9444ce06395925e08ad8fb010f559f117a1551285cd09d10\"" Apr 21 09:58:56.324853 containerd[1491]: time="2026-04-21T09:58:56.323476596Z" level=info msg="StartContainer for \"81fc63599fac3b3a9444ce06395925e08ad8fb010f559f117a1551285cd09d10\"" Apr 21 09:58:56.360613 systemd[1]: Started cri-containerd-81fc63599fac3b3a9444ce06395925e08ad8fb010f559f117a1551285cd09d10.scope - libcontainer container 81fc63599fac3b3a9444ce06395925e08ad8fb010f559f117a1551285cd09d10. 
Apr 21 09:58:56.397218 containerd[1491]: time="2026-04-21T09:58:56.397175198Z" level=info msg="StartContainer for \"81fc63599fac3b3a9444ce06395925e08ad8fb010f559f117a1551285cd09d10\" returns successfully" Apr 21 09:58:56.402070 containerd[1491]: time="2026-04-21T09:58:56.401936847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 09:58:58.738572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990594495.mount: Deactivated successfully. Apr 21 09:58:58.768844 containerd[1491]: time="2026-04-21T09:58:58.768767936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:58.770610 containerd[1491]: time="2026-04-21T09:58:58.770227430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 21 09:58:58.771511 containerd[1491]: time="2026-04-21T09:58:58.771479681Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:58.774909 containerd[1491]: time="2026-04-21T09:58:58.774871432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:58:58.776166 containerd[1491]: time="2026-04-21T09:58:58.776120843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.374136516s" Apr 21 09:58:58.776166 containerd[1491]: 
time="2026-04-21T09:58:58.776164604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 21 09:58:58.784085 containerd[1491]: time="2026-04-21T09:58:58.784024475Z" level=info msg="CreateContainer within sandbox \"59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 09:58:58.800278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386515166.mount: Deactivated successfully. Apr 21 09:58:58.803055 containerd[1491]: time="2026-04-21T09:58:58.802937087Z" level=info msg="CreateContainer within sandbox \"59c740525684f937b9cb3169e0a82b0f6346022b0525e5b1161ce6ed3949e45b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ef0e6d081c18fde09629a3a008a27adcec45468e3c72855cec93266ad761417a\"" Apr 21 09:58:58.805069 containerd[1491]: time="2026-04-21T09:58:58.804723503Z" level=info msg="StartContainer for \"ef0e6d081c18fde09629a3a008a27adcec45468e3c72855cec93266ad761417a\"" Apr 21 09:58:58.839867 systemd[1]: Started cri-containerd-ef0e6d081c18fde09629a3a008a27adcec45468e3c72855cec93266ad761417a.scope - libcontainer container ef0e6d081c18fde09629a3a008a27adcec45468e3c72855cec93266ad761417a. 
Apr 21 09:58:58.880032 containerd[1491]: time="2026-04-21T09:58:58.879982266Z" level=info msg="StartContainer for \"ef0e6d081c18fde09629a3a008a27adcec45468e3c72855cec93266ad761417a\" returns successfully" Apr 21 09:58:59.635967 kubelet[2531]: I0421 09:58:59.635853 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-c64446cc4-8t7mw" podStartSLOduration=2.258957779 podStartE2EDuration="6.63582917s" podCreationTimestamp="2026-04-21 09:58:53 +0000 UTC" firstStartedPulling="2026-04-21 09:58:54.400982228 +0000 UTC m=+43.251113550" lastFinishedPulling="2026-04-21 09:58:58.777853579 +0000 UTC m=+47.627984941" observedRunningTime="2026-04-21 09:58:59.63573653 +0000 UTC m=+48.485867932" watchObservedRunningTime="2026-04-21 09:58:59.63582917 +0000 UTC m=+48.485960532" Apr 21 09:59:02.304950 containerd[1491]: time="2026-04-21T09:59:02.304816198Z" level=info msg="StopPodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\"" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.368 [INFO][4239] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.369 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" iface="eth0" netns="/var/run/netns/cni-b44f0587-657c-7b56-e8fb-98fb98b866c2" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.369 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" iface="eth0" netns="/var/run/netns/cni-b44f0587-657c-7b56-e8fb-98fb98b866c2" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.370 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" iface="eth0" netns="/var/run/netns/cni-b44f0587-657c-7b56-e8fb-98fb98b866c2" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.370 [INFO][4239] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.370 [INFO][4239] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.393 [INFO][4246] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.393 [INFO][4246] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.393 [INFO][4246] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.407 [WARNING][4246] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.407 [INFO][4246] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.411 [INFO][4246] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:02.416219 containerd[1491]: 2026-04-21 09:59:02.413 [INFO][4239] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:02.419302 containerd[1491]: time="2026-04-21T09:59:02.416974825Z" level=info msg="TearDown network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" successfully" Apr 21 09:59:02.419302 containerd[1491]: time="2026-04-21T09:59:02.417016186Z" level=info msg="StopPodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" returns successfully" Apr 21 09:59:02.422968 systemd[1]: run-netns-cni\x2db44f0587\x2d657c\x2d7b56\x2de8fb\x2d98fb98b866c2.mount: Deactivated successfully. 
Apr 21 09:59:02.423365 containerd[1491]: time="2026-04-21T09:59:02.423200989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5t5f,Uid:88a4ed3c-b6a4-4138-826b-621c4a7e3007,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:02.658477 systemd-networkd[1388]: cali6d4ccc8657d: Link UP Apr 21 09:59:02.659307 systemd-networkd[1388]: cali6d4ccc8657d: Gained carrier Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.500 [INFO][4252] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0 csi-node-driver- calico-system 88a4ed3c-b6a4-4138-826b-621c4a7e3007 947 0 2026-04-21 09:58:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b csi-node-driver-h5t5f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6d4ccc8657d [] [] }} ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.501 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.567 [INFO][4264] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" 
HandleID="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.597 [INFO][4264] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" HandleID="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273f10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"csi-node-driver-h5t5f", "timestamp":"2026-04-21 09:59:02.567436201 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004fc000)} Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.598 [INFO][4264] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.598 [INFO][4264] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.598 [INFO][4264] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.604 [INFO][4264] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.614 [INFO][4264] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.622 [INFO][4264] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.625 [INFO][4264] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.629 [INFO][4264] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.630 [INFO][4264] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.632 [INFO][4264] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2 Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.639 [INFO][4264] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.647 [INFO][4264] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.194/26] block=192.168.124.192/26 handle="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.648 [INFO][4264] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.194/26] handle="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.648 [INFO][4264] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:02.681738 containerd[1491]: 2026-04-21 09:59:02.648 [INFO][4264] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.194/26] IPv6=[] ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" HandleID="k8s-pod-network.35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.682440 containerd[1491]: 2026-04-21 09:59:02.652 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88a4ed3c-b6a4-4138-826b-621c4a7e3007", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"csi-node-driver-h5t5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d4ccc8657d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:02.682440 containerd[1491]: 2026-04-21 09:59:02.652 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.194/32] ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.682440 containerd[1491]: 2026-04-21 09:59:02.652 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d4ccc8657d ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.682440 containerd[1491]: 2026-04-21 09:59:02.657 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.682440 
containerd[1491]: 2026-04-21 09:59:02.659 [INFO][4252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88a4ed3c-b6a4-4138-826b-621c4a7e3007", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2", Pod:"csi-node-driver-h5t5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d4ccc8657d", MAC:"d2:27:e9:3f:46:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:02.682440 containerd[1491]: 
2026-04-21 09:59:02.676 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2" Namespace="calico-system" Pod="csi-node-driver-h5t5f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:02.708218 containerd[1491]: time="2026-04-21T09:59:02.708112588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:02.708365 containerd[1491]: time="2026-04-21T09:59:02.708255789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:02.708365 containerd[1491]: time="2026-04-21T09:59:02.708291309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:02.709593 containerd[1491]: time="2026-04-21T09:59:02.708977634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:02.740614 systemd[1]: Started cri-containerd-35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2.scope - libcontainer container 35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2. 
Apr 21 09:59:02.782247 containerd[1491]: time="2026-04-21T09:59:02.781546983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5t5f,Uid:88a4ed3c-b6a4-4138-826b-621c4a7e3007,Namespace:calico-system,Attempt:1,} returns sandbox id \"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2\"" Apr 21 09:59:02.785591 containerd[1491]: time="2026-04-21T09:59:02.785340690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 09:59:03.309893 containerd[1491]: time="2026-04-21T09:59:03.309810353Z" level=info msg="StopPodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\"" Apr 21 09:59:03.311699 containerd[1491]: time="2026-04-21T09:59:03.311653725Z" level=info msg="StopPodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\"" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.393 [INFO][4353] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.394 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" iface="eth0" netns="/var/run/netns/cni-f6606aac-97c7-fd2c-4eba-4163151cea5c" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.395 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" iface="eth0" netns="/var/run/netns/cni-f6606aac-97c7-fd2c-4eba-4163151cea5c" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.399 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" iface="eth0" netns="/var/run/netns/cni-f6606aac-97c7-fd2c-4eba-4163151cea5c" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.399 [INFO][4353] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.399 [INFO][4353] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.433 [INFO][4369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.434 [INFO][4369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.434 [INFO][4369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.453 [WARNING][4369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.453 [INFO][4369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.455 [INFO][4369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.466926 containerd[1491]: 2026-04-21 09:59:03.462 [INFO][4353] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:03.469575 containerd[1491]: time="2026-04-21T09:59:03.468525397Z" level=info msg="TearDown network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" successfully" Apr 21 09:59:03.469575 containerd[1491]: time="2026-04-21T09:59:03.468598038Z" level=info msg="StopPodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" returns successfully" Apr 21 09:59:03.473773 systemd[1]: run-netns-cni\x2df6606aac\x2d97c7\x2dfd2c\x2d4eba\x2d4163151cea5c.mount: Deactivated successfully. 
Apr 21 09:59:03.476351 containerd[1491]: time="2026-04-21T09:59:03.475695244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gbnk5,Uid:f14c36e9-166a-4256-9de6-cdefe0504d6e,Namespace:kube-system,Attempt:1,}" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.393 [INFO][4354] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.394 [INFO][4354] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" iface="eth0" netns="/var/run/netns/cni-142a3177-823c-67d2-81bb-ee3d304f3cce" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.395 [INFO][4354] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" iface="eth0" netns="/var/run/netns/cni-142a3177-823c-67d2-81bb-ee3d304f3cce" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.396 [INFO][4354] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" iface="eth0" netns="/var/run/netns/cni-142a3177-823c-67d2-81bb-ee3d304f3cce" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.396 [INFO][4354] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.396 [INFO][4354] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.438 [INFO][4367] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.438 [INFO][4367] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.455 [INFO][4367] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.472 [WARNING][4367] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.472 [INFO][4367] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.476 [INFO][4367] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.480774 containerd[1491]: 2026-04-21 09:59:03.479 [INFO][4354] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:03.481204 containerd[1491]: time="2026-04-21T09:59:03.480937719Z" level=info msg="TearDown network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" successfully" Apr 21 09:59:03.481204 containerd[1491]: time="2026-04-21T09:59:03.480964959Z" level=info msg="StopPodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" returns successfully" Apr 21 09:59:03.485589 systemd[1]: run-netns-cni\x2d142a3177\x2d823c\x2d67d2\x2d81bb\x2dee3d304f3cce.mount: Deactivated successfully. 
Apr 21 09:59:03.486511 containerd[1491]: time="2026-04-21T09:59:03.485808951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gc82f,Uid:5cf12a5d-710e-475b-9da9-806fe2f83ca0,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:03.690756 systemd-networkd[1388]: calie4c5a34c259: Link UP Apr 21 09:59:03.693141 systemd-networkd[1388]: calie4c5a34c259: Gained carrier Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.560 [INFO][4381] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0 coredns-7d764666f9- kube-system f14c36e9-166a-4256-9de6-cdefe0504d6e 958 0 2026-04-21 09:58:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b coredns-7d764666f9-gbnk5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4c5a34c259 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.561 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.610 [INFO][4405] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" 
HandleID="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.634 [INFO][4405] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" HandleID="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036d870), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"coredns-7d764666f9-gbnk5", "timestamp":"2026-04-21 09:59:03.610256889 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000152f20)} Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.634 [INFO][4405] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.634 [INFO][4405] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.634 [INFO][4405] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.644 [INFO][4405] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.651 [INFO][4405] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.658 [INFO][4405] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.661 [INFO][4405] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.664 [INFO][4405] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.664 [INFO][4405] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.666 [INFO][4405] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.672 [INFO][4405] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.682 [INFO][4405] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.195/26] block=192.168.124.192/26 handle="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.683 [INFO][4405] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.195/26] handle="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.683 [INFO][4405] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.716904 containerd[1491]: 2026-04-21 09:59:03.683 [INFO][4405] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.195/26] IPv6=[] ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" HandleID="k8s-pod-network.e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.718093 containerd[1491]: 2026-04-21 09:59:03.685 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f14c36e9-166a-4256-9de6-cdefe0504d6e", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"coredns-7d764666f9-gbnk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c5a34c259", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.718093 containerd[1491]: 2026-04-21 09:59:03.686 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.195/32] ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.718093 containerd[1491]: 2026-04-21 09:59:03.686 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4c5a34c259 
ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.718093 containerd[1491]: 2026-04-21 09:59:03.693 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.718093 containerd[1491]: 2026-04-21 09:59:03.694 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f14c36e9-166a-4256-9de6-cdefe0504d6e", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", 
ContainerID:"e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc", Pod:"coredns-7d764666f9-gbnk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c5a34c259", MAC:"de:57:57:f8:54:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.718427 containerd[1491]: 2026-04-21 09:59:03.713 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc" Namespace="kube-system" Pod="coredns-7d764666f9-gbnk5" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:03.720700 systemd-networkd[1388]: cali6d4ccc8657d: Gained IPv6LL Apr 21 09:59:03.747326 containerd[1491]: time="2026-04-21T09:59:03.747105310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:03.749112 containerd[1491]: time="2026-04-21T09:59:03.749020842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:03.749112 containerd[1491]: time="2026-04-21T09:59:03.749068282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.749426 containerd[1491]: time="2026-04-21T09:59:03.749302164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.788642 systemd[1]: Started cri-containerd-e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc.scope - libcontainer container e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc. Apr 21 09:59:03.821197 systemd-networkd[1388]: califd93cb06382: Link UP Apr 21 09:59:03.824368 systemd-networkd[1388]: califd93cb06382: Gained carrier Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.589 [INFO][4390] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0 goldmane-9f7667bb8- calico-system 5cf12a5d-710e-475b-9da9-806fe2f83ca0 959 0 2026-04-21 09:58:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b goldmane-9f7667bb8-gc82f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califd93cb06382 [] [] }} ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-" Apr 21 
09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.589 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.629 [INFO][4412] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" HandleID="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.647 [INFO][4412] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" HandleID="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"goldmane-9f7667bb8-gc82f", "timestamp":"2026-04-21 09:59:03.629794938 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.647 [INFO][4412] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.683 [INFO][4412] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.683 [INFO][4412] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.747 [INFO][4412] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.761 [INFO][4412] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.772 [INFO][4412] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.778 [INFO][4412] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.783 [INFO][4412] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.785 [INFO][4412] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.789 [INFO][4412] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7 Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.799 [INFO][4412] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.808 [INFO][4412] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.196/26] block=192.168.124.192/26 handle="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.809 [INFO][4412] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.196/26] handle="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.809 [INFO][4412] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:03.853421 containerd[1491]: 2026-04-21 09:59:03.809 [INFO][4412] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.196/26] IPv6=[] ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" HandleID="k8s-pod-network.72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.854922 containerd[1491]: 2026-04-21 09:59:03.813 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"5cf12a5d-710e-475b-9da9-806fe2f83ca0", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"goldmane-9f7667bb8-gc82f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd93cb06382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.854922 containerd[1491]: 2026-04-21 09:59:03.814 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.196/32] ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.854922 containerd[1491]: 2026-04-21 09:59:03.814 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd93cb06382 ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.854922 containerd[1491]: 2026-04-21 09:59:03.821 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.854922 containerd[1491]: 2026-04-21 09:59:03.825 [INFO][4390] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"5cf12a5d-710e-475b-9da9-806fe2f83ca0", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7", Pod:"goldmane-9f7667bb8-gc82f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd93cb06382", MAC:"da:d0:2d:97:a1:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:03.854922 containerd[1491]: 2026-04-21 09:59:03.842 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7" Namespace="calico-system" Pod="goldmane-9f7667bb8-gc82f" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:03.865160 containerd[1491]: time="2026-04-21T09:59:03.864722083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-gbnk5,Uid:f14c36e9-166a-4256-9de6-cdefe0504d6e,Namespace:kube-system,Attempt:1,} returns sandbox id \"e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc\"" Apr 21 09:59:03.876404 containerd[1491]: time="2026-04-21T09:59:03.875913237Z" level=info msg="CreateContainer within sandbox \"e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 09:59:03.894408 containerd[1491]: time="2026-04-21T09:59:03.893509592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:03.894408 containerd[1491]: time="2026-04-21T09:59:03.893577553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:03.894408 containerd[1491]: time="2026-04-21T09:59:03.893604393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.894408 containerd[1491]: time="2026-04-21T09:59:03.893711514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:03.902440 containerd[1491]: time="2026-04-21T09:59:03.901991368Z" level=info msg="CreateContainer within sandbox \"e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5179304c4e34329139b74af18335466e031d526627810928519ab12f709a634b\"" Apr 21 09:59:03.905419 containerd[1491]: time="2026-04-21T09:59:03.905303390Z" level=info msg="StartContainer for \"5179304c4e34329139b74af18335466e031d526627810928519ab12f709a634b\"" Apr 21 09:59:03.922212 systemd[1]: Started cri-containerd-72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7.scope - libcontainer container 72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7. Apr 21 09:59:03.971743 systemd[1]: Started cri-containerd-5179304c4e34329139b74af18335466e031d526627810928519ab12f709a634b.scope - libcontainer container 5179304c4e34329139b74af18335466e031d526627810928519ab12f709a634b. 
Apr 21 09:59:04.039171 containerd[1491]: time="2026-04-21T09:59:04.038670771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-gc82f,Uid:5cf12a5d-710e-475b-9da9-806fe2f83ca0,Namespace:calico-system,Attempt:1,} returns sandbox id \"72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7\"" Apr 21 09:59:04.045533 containerd[1491]: time="2026-04-21T09:59:04.045449413Z" level=info msg="StartContainer for \"5179304c4e34329139b74af18335466e031d526627810928519ab12f709a634b\" returns successfully" Apr 21 09:59:04.304335 containerd[1491]: time="2026-04-21T09:59:04.303130762Z" level=info msg="StopPodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\"" Apr 21 09:59:04.304655 containerd[1491]: time="2026-04-21T09:59:04.304283729Z" level=info msg="StopPodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\"" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.437 [INFO][4598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.438 [INFO][4598] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" iface="eth0" netns="/var/run/netns/cni-9f0f8bc2-9096-a800-7e1f-88fb027d2495" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.438 [INFO][4598] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" iface="eth0" netns="/var/run/netns/cni-9f0f8bc2-9096-a800-7e1f-88fb027d2495" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.439 [INFO][4598] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" iface="eth0" netns="/var/run/netns/cni-9f0f8bc2-9096-a800-7e1f-88fb027d2495" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.439 [INFO][4598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.439 [INFO][4598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.492 [INFO][4611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.492 [INFO][4611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.492 [INFO][4611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.503 [WARNING][4611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.503 [INFO][4611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.506 [INFO][4611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:04.512625 containerd[1491]: 2026-04-21 09:59:04.509 [INFO][4598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:04.513998 containerd[1491]: time="2026-04-21T09:59:04.512752615Z" level=info msg="TearDown network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" successfully" Apr 21 09:59:04.513998 containerd[1491]: time="2026-04-21T09:59:04.512785135Z" level=info msg="StopPodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" returns successfully" Apr 21 09:59:04.521701 systemd[1]: run-netns-cni\x2d9f0f8bc2\x2d9096\x2da800\x2d7e1f\x2d88fb027d2495.mount: Deactivated successfully. 
Apr 21 09:59:04.525876 containerd[1491]: time="2026-04-21T09:59:04.525334612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-px5v8,Uid:f066e964-91d8-4473-bead-51ea9c76986c,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.446 [INFO][4599] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.446 [INFO][4599] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" iface="eth0" netns="/var/run/netns/cni-67396dc3-588a-99f1-c948-f0cc182a60fc" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.447 [INFO][4599] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" iface="eth0" netns="/var/run/netns/cni-67396dc3-588a-99f1-c948-f0cc182a60fc" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.449 [INFO][4599] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" iface="eth0" netns="/var/run/netns/cni-67396dc3-588a-99f1-c948-f0cc182a60fc" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.449 [INFO][4599] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.449 [INFO][4599] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.503 [INFO][4616] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.503 [INFO][4616] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.506 [INFO][4616] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.535 [WARNING][4616] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.535 [INFO][4616] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.541 [INFO][4616] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:04.548211 containerd[1491]: 2026-04-21 09:59:04.545 [INFO][4599] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:04.552534 systemd[1]: run-netns-cni\x2d67396dc3\x2d588a\x2d99f1\x2dc948\x2df0cc182a60fc.mount: Deactivated successfully. 
Apr 21 09:59:04.569721 containerd[1491]: time="2026-04-21T09:59:04.568339957Z" level=info msg="TearDown network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" successfully" Apr 21 09:59:04.569721 containerd[1491]: time="2026-04-21T09:59:04.568409198Z" level=info msg="StopPodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" returns successfully" Apr 21 09:59:04.572674 containerd[1491]: time="2026-04-21T09:59:04.572625264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cp2pp,Uid:da192526-415e-407b-a486-c9ee15869745,Namespace:kube-system,Attempt:1,}" Apr 21 09:59:04.735777 kubelet[2531]: I0421 09:59:04.735699 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-gbnk5" podStartSLOduration=48.735683149 podStartE2EDuration="48.735683149s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:59:04.675018455 +0000 UTC m=+53.525149777" watchObservedRunningTime="2026-04-21 09:59:04.735683149 +0000 UTC m=+53.585814471" Apr 21 09:59:04.758761 systemd[1]: Started sshd@7-178.104.214.66:22-50.85.169.122:46550.service - OpenSSH per-connection server daemon (50.85.169.122:46550). Apr 21 09:59:04.893600 systemd-networkd[1388]: calibd063c0e382: Link UP Apr 21 09:59:04.894733 systemd-networkd[1388]: calibd063c0e382: Gained carrier Apr 21 09:59:04.905498 sshd[4662]: Accepted publickey for core from 50.85.169.122 port 46550 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:04.908045 sshd[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:04.918936 systemd-logind[1464]: New session 8 of user core. 
Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.720 [INFO][4627] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0 calico-apiserver-5459f6b57d- calico-system f066e964-91d8-4473-bead-51ea9c76986c 978 0 2026-04-21 09:58:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5459f6b57d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b calico-apiserver-5459f6b57d-px5v8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibd063c0e382 [] [] }} ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.720 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.808 [INFO][4664] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" HandleID="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.835 [INFO][4664] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" HandleID="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbf60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"calico-apiserver-5459f6b57d-px5v8", "timestamp":"2026-04-21 09:59:04.808207036 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.836 [INFO][4664] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.836 [INFO][4664] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.836 [INFO][4664] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.839 [INFO][4664] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.852 [INFO][4664] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.860 [INFO][4664] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.863 [INFO][4664] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.866 [INFO][4664] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.867 [INFO][4664] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.869 [INFO][4664] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22 Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.875 [INFO][4664] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.884 [INFO][4664] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.197/26] block=192.168.124.192/26 handle="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.884 [INFO][4664] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.197/26] handle="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.884 [INFO][4664] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:04.923252 containerd[1491]: 2026-04-21 09:59:04.884 [INFO][4664] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.197/26] IPv6=[] ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" HandleID="k8s-pod-network.16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.925286 containerd[1491]: 2026-04-21 09:59:04.887 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"f066e964-91d8-4473-bead-51ea9c76986c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"calico-apiserver-5459f6b57d-px5v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd063c0e382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:04.925286 containerd[1491]: 2026-04-21 09:59:04.888 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.197/32] ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.925286 containerd[1491]: 2026-04-21 09:59:04.888 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd063c0e382 ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.925286 containerd[1491]: 2026-04-21 09:59:04.895 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" 
Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.925286 containerd[1491]: 2026-04-21 09:59:04.896 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"f066e964-91d8-4473-bead-51ea9c76986c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22", Pod:"calico-apiserver-5459f6b57d-px5v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd063c0e382", 
MAC:"3e:af:94:0b:a3:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:04.925286 containerd[1491]: 2026-04-21 09:59:04.920 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-px5v8" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:04.924492 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 09:59:04.958631 containerd[1491]: time="2026-04-21T09:59:04.957921680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:04.958631 containerd[1491]: time="2026-04-21T09:59:04.958071680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:04.958631 containerd[1491]: time="2026-04-21T09:59:04.958104721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:04.960477 containerd[1491]: time="2026-04-21T09:59:04.960410975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:04.991889 systemd[1]: Started cri-containerd-16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22.scope - libcontainer container 16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22. 
Apr 21 09:59:05.019305 systemd-networkd[1388]: calib6eca454ee2: Link UP Apr 21 09:59:05.020898 systemd-networkd[1388]: calib6eca454ee2: Gained carrier Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.751 [INFO][4637] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0 coredns-7d764666f9- kube-system da192526-415e-407b-a486-c9ee15869745 979 0 2026-04-21 09:58:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b coredns-7d764666f9-cp2pp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6eca454ee2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.751 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.840 [INFO][4672] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" HandleID="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.852 [INFO][4672] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" HandleID="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000630350), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"coredns-7d764666f9-cp2pp", "timestamp":"2026-04-21 09:59:04.840774997 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.852 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.884 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.884 [INFO][4672] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.940 [INFO][4672] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.955 [INFO][4672] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.971 [INFO][4672] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.975 [INFO][4672] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.981 [INFO][4672] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.981 [INFO][4672] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.986 [INFO][4672] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:04.994 [INFO][4672] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:05.004 [INFO][4672] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.198/26] block=192.168.124.192/26 handle="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:05.004 [INFO][4672] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.198/26] handle="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:05.004 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:05.071706 containerd[1491]: 2026-04-21 09:59:05.004 [INFO][4672] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.198/26] IPv6=[] ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" HandleID="k8s-pod-network.709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.072292 containerd[1491]: 2026-04-21 09:59:05.012 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"da192526-415e-407b-a486-c9ee15869745", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"coredns-7d764666f9-cp2pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6eca454ee2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:05.072292 containerd[1491]: 2026-04-21 09:59:05.012 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.198/32] ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.072292 containerd[1491]: 2026-04-21 09:59:05.012 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6eca454ee2 
ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.072292 containerd[1491]: 2026-04-21 09:59:05.034 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.072292 containerd[1491]: 2026-04-21 09:59:05.038 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"da192526-415e-407b-a486-c9ee15869745", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", 
ContainerID:"709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d", Pod:"coredns-7d764666f9-cp2pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6eca454ee2", MAC:"aa:22:cb:13:c3:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:05.076255 containerd[1491]: 2026-04-21 09:59:05.062 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d" Namespace="kube-system" Pod="coredns-7d764666f9-cp2pp" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:05.146340 containerd[1491]: time="2026-04-21T09:59:05.145144858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-px5v8,Uid:f066e964-91d8-4473-bead-51ea9c76986c,Namespace:calico-system,Attempt:1,} returns sandbox id \"16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22\"" Apr 21 09:59:05.168764 containerd[1491]: time="2026-04-21T09:59:05.168302912Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:05.168764 containerd[1491]: time="2026-04-21T09:59:05.168429553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:05.168764 containerd[1491]: time="2026-04-21T09:59:05.168445633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:05.168764 containerd[1491]: time="2026-04-21T09:59:05.168561834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:05.204603 systemd[1]: Started cri-containerd-709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d.scope - libcontainer container 709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d. Apr 21 09:59:05.206179 sshd[4662]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:05.214049 systemd[1]: sshd@7-178.104.214.66:22-50.85.169.122:46550.service: Deactivated successfully. Apr 21 09:59:05.223186 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 09:59:05.226350 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Apr 21 09:59:05.232461 systemd-logind[1464]: Removed session 8. 
Apr 21 09:59:05.275419 containerd[1491]: time="2026-04-21T09:59:05.275330131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cp2pp,Uid:da192526-415e-407b-a486-c9ee15869745,Namespace:kube-system,Attempt:1,} returns sandbox id \"709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d\"" Apr 21 09:59:05.287259 containerd[1491]: time="2026-04-21T09:59:05.287194799Z" level=info msg="CreateContainer within sandbox \"709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 09:59:05.304451 containerd[1491]: time="2026-04-21T09:59:05.303615454Z" level=info msg="StopPodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\"" Apr 21 09:59:05.310694 containerd[1491]: time="2026-04-21T09:59:05.310105172Z" level=info msg="CreateContainer within sandbox \"709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1de24cb8b6e9b159db8f7ec79b3b889ef4eb4c3655b8036d63811ccdf67389e\"" Apr 21 09:59:05.311939 containerd[1491]: time="2026-04-21T09:59:05.311758341Z" level=info msg="StartContainer for \"f1de24cb8b6e9b159db8f7ec79b3b889ef4eb4c3655b8036d63811ccdf67389e\"" Apr 21 09:59:05.388315 systemd[1]: Started cri-containerd-f1de24cb8b6e9b159db8f7ec79b3b889ef4eb4c3655b8036d63811ccdf67389e.scope - libcontainer container f1de24cb8b6e9b159db8f7ec79b3b889ef4eb4c3655b8036d63811ccdf67389e. 
Apr 21 09:59:05.448786 systemd-networkd[1388]: calie4c5a34c259: Gained IPv6LL Apr 21 09:59:05.455242 containerd[1491]: time="2026-04-21T09:59:05.455179130Z" level=info msg="StartContainer for \"f1de24cb8b6e9b159db8f7ec79b3b889ef4eb4c3655b8036d63811ccdf67389e\" returns successfully" Apr 21 09:59:05.521637 containerd[1491]: time="2026-04-21T09:59:05.521009071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:05.523578 containerd[1491]: time="2026-04-21T09:59:05.523514805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 21 09:59:05.527202 containerd[1491]: time="2026-04-21T09:59:05.526207421Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:05.532262 containerd[1491]: time="2026-04-21T09:59:05.531598652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:05.533706 containerd[1491]: time="2026-04-21T09:59:05.531951014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.746569004s" Apr 21 09:59:05.535092 containerd[1491]: time="2026-04-21T09:59:05.533193381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 21 09:59:05.536894 containerd[1491]: time="2026-04-21T09:59:05.536849563Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 09:59:05.542113 containerd[1491]: time="2026-04-21T09:59:05.542072633Z" level=info msg="CreateContainer within sandbox \"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 09:59:05.584993 containerd[1491]: time="2026-04-21T09:59:05.584778480Z" level=info msg="CreateContainer within sandbox \"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"abd548dd539b1e7c022482894fc207f2f6cbafe5ac0a58360bd49a31086a311a\"" Apr 21 09:59:05.587707 containerd[1491]: time="2026-04-21T09:59:05.587577576Z" level=info msg="StartContainer for \"abd548dd539b1e7c022482894fc207f2f6cbafe5ac0a58360bd49a31086a311a\"" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.464 [INFO][4828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.464 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" iface="eth0" netns="/var/run/netns/cni-7b3d876f-1360-d760-9576-2c139e4f42b2" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.464 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" iface="eth0" netns="/var/run/netns/cni-7b3d876f-1360-d760-9576-2c139e4f42b2" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.465 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" iface="eth0" netns="/var/run/netns/cni-7b3d876f-1360-d760-9576-2c139e4f42b2" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.465 [INFO][4828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.465 [INFO][4828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.579 [INFO][4867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.579 [INFO][4867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.579 [INFO][4867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.606 [WARNING][4867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.606 [INFO][4867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.613 [INFO][4867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:05.629138 containerd[1491]: 2026-04-21 09:59:05.620 [INFO][4828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:05.646066 containerd[1491]: time="2026-04-21T09:59:05.630037461Z" level=info msg="TearDown network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" successfully" Apr 21 09:59:05.646066 containerd[1491]: time="2026-04-21T09:59:05.630075581Z" level=info msg="StopPodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" returns successfully" Apr 21 09:59:05.642263 systemd[1]: run-netns-cni\x2d7b3d876f\x2d1360\x2dd760\x2d9576\x2d2c139e4f42b2.mount: Deactivated successfully. Apr 21 09:59:05.651630 systemd[1]: Started cri-containerd-abd548dd539b1e7c022482894fc207f2f6cbafe5ac0a58360bd49a31086a311a.scope - libcontainer container abd548dd539b1e7c022482894fc207f2f6cbafe5ac0a58360bd49a31086a311a. 
Apr 21 09:59:05.651852 containerd[1491]: time="2026-04-21T09:59:05.647320521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-nbrdx,Uid:6fb1502d-d737-4fec-9a69-7870084d206a,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:05.695208 kubelet[2531]: I0421 09:59:05.695109 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cp2pp" podStartSLOduration=49.695091997 podStartE2EDuration="49.695091997s" podCreationTimestamp="2026-04-21 09:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 09:59:05.692687383 +0000 UTC m=+54.542818665" watchObservedRunningTime="2026-04-21 09:59:05.695091997 +0000 UTC m=+54.545223319" Apr 21 09:59:05.751775 containerd[1491]: time="2026-04-21T09:59:05.750030915Z" level=info msg="StartContainer for \"abd548dd539b1e7c022482894fc207f2f6cbafe5ac0a58360bd49a31086a311a\" returns successfully" Apr 21 09:59:05.770521 systemd-networkd[1388]: califd93cb06382: Gained IPv6LL Apr 21 09:59:05.893234 systemd-networkd[1388]: cali008e5664097: Link UP Apr 21 09:59:05.894643 systemd-networkd[1388]: cali008e5664097: Gained carrier Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.789 [INFO][4908] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0 calico-apiserver-5459f6b57d- calico-system 6fb1502d-d737-4fec-9a69-7870084d206a 1027 0 2026-04-21 09:58:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5459f6b57d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b calico-apiserver-5459f6b57d-nbrdx eth0 calico-apiserver [] [] [kns.calico-system 
ksa.calico-system.calico-apiserver] cali008e5664097 [] [] }} ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.789 [INFO][4908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.829 [INFO][4931] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" HandleID="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.842 [INFO][4931] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" HandleID="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"calico-apiserver-5459f6b57d-nbrdx", "timestamp":"2026-04-21 09:59:05.829808296 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x400030edc0)} Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.842 [INFO][4931] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.842 [INFO][4931] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.842 [INFO][4931] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.846 [INFO][4931] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.855 [INFO][4931] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.862 [INFO][4931] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.864 [INFO][4931] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.868 [INFO][4931] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.868 [INFO][4931] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.871 [INFO][4931] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3 Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 
09:59:05.877 [INFO][4931] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.887 [INFO][4931] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.199/26] block=192.168.124.192/26 handle="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.887 [INFO][4931] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.199/26] handle="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.887 [INFO][4931] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:05.922359 containerd[1491]: 2026-04-21 09:59:05.887 [INFO][4931] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.199/26] IPv6=[] ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" HandleID="k8s-pod-network.5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.923169 containerd[1491]: 2026-04-21 09:59:05.891 [INFO][4908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0", GenerateName:"calico-apiserver-5459f6b57d-", 
Namespace:"calico-system", SelfLink:"", UID:"6fb1502d-d737-4fec-9a69-7870084d206a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"calico-apiserver-5459f6b57d-nbrdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali008e5664097", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:05.923169 containerd[1491]: 2026-04-21 09:59:05.891 [INFO][4908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.199/32] ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.923169 containerd[1491]: 2026-04-21 09:59:05.891 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali008e5664097 ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" 
WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.923169 containerd[1491]: 2026-04-21 09:59:05.895 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.923169 containerd[1491]: 2026-04-21 09:59:05.896 [INFO][4908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"6fb1502d-d737-4fec-9a69-7870084d206a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", 
ContainerID:"5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3", Pod:"calico-apiserver-5459f6b57d-nbrdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali008e5664097", MAC:"36:8e:56:dd:e5:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:05.923169 containerd[1491]: 2026-04-21 09:59:05.919 [INFO][4908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3" Namespace="calico-system" Pod="calico-apiserver-5459f6b57d-nbrdx" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:05.951236 containerd[1491]: time="2026-04-21T09:59:05.951066877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:05.951236 containerd[1491]: time="2026-04-21T09:59:05.951149677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:05.952107 containerd[1491]: time="2026-04-21T09:59:05.951206958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:05.953040 containerd[1491]: time="2026-04-21T09:59:05.952761287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:05.978675 systemd[1]: Started cri-containerd-5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3.scope - libcontainer container 5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3. 
Apr 21 09:59:06.038301 containerd[1491]: time="2026-04-21T09:59:06.038168727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5459f6b57d-nbrdx,Uid:6fb1502d-d737-4fec-9a69-7870084d206a,Namespace:calico-system,Attempt:1,} returns sandbox id \"5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3\"" Apr 21 09:59:06.152675 systemd-networkd[1388]: calibd063c0e382: Gained IPv6LL Apr 21 09:59:06.856661 systemd-networkd[1388]: calib6eca454ee2: Gained IPv6LL Apr 21 09:59:06.984615 systemd-networkd[1388]: cali008e5664097: Gained IPv6LL Apr 21 09:59:07.308163 containerd[1491]: time="2026-04-21T09:59:07.307606102Z" level=info msg="StopPodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\"" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.399 [INFO][5029] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.400 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" iface="eth0" netns="/var/run/netns/cni-67b7ef05-0799-1289-70b8-46d8ae661c8a" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.400 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" iface="eth0" netns="/var/run/netns/cni-67b7ef05-0799-1289-70b8-46d8ae661c8a" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.402 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" iface="eth0" netns="/var/run/netns/cni-67b7ef05-0799-1289-70b8-46d8ae661c8a" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.402 [INFO][5029] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.402 [INFO][5029] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.448 [INFO][5036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.448 [INFO][5036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.448 [INFO][5036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.470 [WARNING][5036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.471 [INFO][5036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.473 [INFO][5036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:07.483360 containerd[1491]: 2026-04-21 09:59:07.478 [INFO][5029] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:07.487116 containerd[1491]: time="2026-04-21T09:59:07.486975134Z" level=info msg="TearDown network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" successfully" Apr 21 09:59:07.487116 containerd[1491]: time="2026-04-21T09:59:07.487015814Z" level=info msg="StopPodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" returns successfully" Apr 21 09:59:07.490866 containerd[1491]: time="2026-04-21T09:59:07.490733313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f77bf9b9-58b2r,Uid:e864a05e-cd95-4bff-bcff-cd119ea67d7b,Namespace:calico-system,Attempt:1,}" Apr 21 09:59:07.491584 systemd[1]: run-netns-cni\x2d67b7ef05\x2d0799\x2d1289\x2d70b8\x2d46d8ae661c8a.mount: Deactivated successfully. Apr 21 09:59:07.671331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280459818.mount: Deactivated successfully. 
Apr 21 09:59:07.707071 systemd-networkd[1388]: cali56c02b99ff6: Link UP Apr 21 09:59:07.708101 systemd-networkd[1388]: cali56c02b99ff6: Gained carrier Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.572 [INFO][5046] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0 calico-kube-controllers-77f77bf9b9- calico-system e864a05e-cd95-4bff-bcff-cd119ea67d7b 1055 0 2026-04-21 09:58:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77f77bf9b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-a-ee081c135b calico-kube-controllers-77f77bf9b9-58b2r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali56c02b99ff6 [] [] }} ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.572 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.618 [INFO][5055] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" HandleID="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" 
Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.639 [INFO][5055] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" HandleID="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f34c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-ee081c135b", "pod":"calico-kube-controllers-77f77bf9b9-58b2r", "timestamp":"2026-04-21 09:59:07.618294921 +0000 UTC"}, Hostname:"ci-4081-3-7-a-ee081c135b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b2dc0)} Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.640 [INFO][5055] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.640 [INFO][5055] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.640 [INFO][5055] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-ee081c135b' Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.644 [INFO][5055] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.651 [INFO][5055] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.661 [INFO][5055] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.664 [INFO][5055] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.672 [INFO][5055] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.672 [INFO][5055] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.675 [INFO][5055] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1 Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.682 [INFO][5055] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.693 [INFO][5055] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.200/26] block=192.168.124.192/26 handle="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.693 [INFO][5055] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.200/26] handle="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" host="ci-4081-3-7-a-ee081c135b" Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.693 [INFO][5055] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:07.741418 containerd[1491]: 2026-04-21 09:59:07.693 [INFO][5055] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.200/26] IPv6=[] ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" HandleID="k8s-pod-network.f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.742017 containerd[1491]: 2026-04-21 09:59:07.697 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0", GenerateName:"calico-kube-controllers-77f77bf9b9-", Namespace:"calico-system", SelfLink:"", UID:"e864a05e-cd95-4bff-bcff-cd119ea67d7b", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f77bf9b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"", Pod:"calico-kube-controllers-77f77bf9b9-58b2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56c02b99ff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:07.742017 containerd[1491]: 2026-04-21 09:59:07.698 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.200/32] ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.742017 containerd[1491]: 2026-04-21 09:59:07.698 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56c02b99ff6 ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.742017 containerd[1491]: 2026-04-21 09:59:07.709 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.742017 containerd[1491]: 2026-04-21 09:59:07.711 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0", GenerateName:"calico-kube-controllers-77f77bf9b9-", Namespace:"calico-system", SelfLink:"", UID:"e864a05e-cd95-4bff-bcff-cd119ea67d7b", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f77bf9b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1", Pod:"calico-kube-controllers-77f77bf9b9-58b2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56c02b99ff6", MAC:"2e:f5:33:c8:86:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:07.742017 containerd[1491]: 2026-04-21 09:59:07.732 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1" Namespace="calico-system" Pod="calico-kube-controllers-77f77bf9b9-58b2r" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:07.799508 containerd[1491]: time="2026-04-21T09:59:07.795921063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 09:59:07.799508 containerd[1491]: time="2026-04-21T09:59:07.796134664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 09:59:07.799508 containerd[1491]: time="2026-04-21T09:59:07.796149464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:07.799508 containerd[1491]: time="2026-04-21T09:59:07.796506866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 09:59:07.834591 systemd[1]: Started cri-containerd-f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1.scope - libcontainer container f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1. 
Apr 21 09:59:07.909763 containerd[1491]: time="2026-04-21T09:59:07.909721041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f77bf9b9-58b2r,Uid:e864a05e-cd95-4bff-bcff-cd119ea67d7b,Namespace:calico-system,Attempt:1,} returns sandbox id \"f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1\"" Apr 21 09:59:08.168790 containerd[1491]: time="2026-04-21T09:59:08.168719304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:08.170089 containerd[1491]: time="2026-04-21T09:59:08.170048150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 21 09:59:08.171617 containerd[1491]: time="2026-04-21T09:59:08.171128395Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:08.174643 containerd[1491]: time="2026-04-21T09:59:08.174562972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:08.175946 containerd[1491]: time="2026-04-21T09:59:08.175903178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.638916735s" Apr 21 09:59:08.175946 containerd[1491]: time="2026-04-21T09:59:08.175943018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 21 09:59:08.179852 containerd[1491]: time="2026-04-21T09:59:08.179800677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 09:59:08.182915 containerd[1491]: time="2026-04-21T09:59:08.182788211Z" level=info msg="CreateContainer within sandbox \"72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 09:59:08.209206 containerd[1491]: time="2026-04-21T09:59:08.207884450Z" level=info msg="CreateContainer within sandbox \"72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0781e29e7d60d40456ad3db3acae093cda8e040f7882837351f8c183e9e1622d\"" Apr 21 09:59:08.210771 containerd[1491]: time="2026-04-21T09:59:08.210079461Z" level=info msg="StartContainer for \"0781e29e7d60d40456ad3db3acae093cda8e040f7882837351f8c183e9e1622d\"" Apr 21 09:59:08.238630 systemd[1]: Started cri-containerd-0781e29e7d60d40456ad3db3acae093cda8e040f7882837351f8c183e9e1622d.scope - libcontainer container 0781e29e7d60d40456ad3db3acae093cda8e040f7882837351f8c183e9e1622d. 
Apr 21 09:59:08.281408 containerd[1491]: time="2026-04-21T09:59:08.281336600Z" level=info msg="StartContainer for \"0781e29e7d60d40456ad3db3acae093cda8e040f7882837351f8c183e9e1622d\" returns successfully" Apr 21 09:59:08.731189 kubelet[2531]: I0421 09:59:08.730733 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-gc82f" podStartSLOduration=34.596610436 podStartE2EDuration="38.730717061s" podCreationTimestamp="2026-04-21 09:58:30 +0000 UTC" firstStartedPulling="2026-04-21 09:59:04.043626442 +0000 UTC m=+52.893757764" lastFinishedPulling="2026-04-21 09:59:08.177733107 +0000 UTC m=+57.027864389" observedRunningTime="2026-04-21 09:59:08.730142178 +0000 UTC m=+57.580273540" watchObservedRunningTime="2026-04-21 09:59:08.730717061 +0000 UTC m=+57.580848383" Apr 21 09:59:09.353100 systemd-networkd[1388]: cali56c02b99ff6: Gained IPv6LL Apr 21 09:59:10.235261 systemd[1]: Started sshd@8-178.104.214.66:22-50.85.169.122:46856.service - OpenSSH per-connection server daemon (50.85.169.122:46856). Apr 21 09:59:10.373957 sshd[5218]: Accepted publickey for core from 50.85.169.122 port 46856 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:10.376697 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:10.384418 systemd-logind[1464]: New session 9 of user core. Apr 21 09:59:10.389624 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 09:59:10.607914 sshd[5218]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:10.613598 systemd[1]: sshd@8-178.104.214.66:22-50.85.169.122:46856.service: Deactivated successfully. Apr 21 09:59:10.618536 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 09:59:10.620582 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Apr 21 09:59:10.622844 systemd-logind[1464]: Removed session 9. 
Apr 21 09:59:10.783939 containerd[1491]: time="2026-04-21T09:59:10.782597085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:10.783939 containerd[1491]: time="2026-04-21T09:59:10.783855411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 21 09:59:10.784871 containerd[1491]: time="2026-04-21T09:59:10.784823455Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:10.788998 containerd[1491]: time="2026-04-21T09:59:10.788957472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:10.790402 containerd[1491]: time="2026-04-21T09:59:10.789727155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.609731077s" Apr 21 09:59:10.791365 containerd[1491]: time="2026-04-21T09:59:10.791342122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 21 09:59:10.792588 containerd[1491]: time="2026-04-21T09:59:10.792556927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 09:59:10.798494 containerd[1491]: time="2026-04-21T09:59:10.798439272Z" level=info msg="CreateContainer within sandbox 
\"16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 09:59:10.820875 containerd[1491]: time="2026-04-21T09:59:10.820778965Z" level=info msg="CreateContainer within sandbox \"16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd443326cd4377de60fce901f2ecc61fc2a1672374445d3a60dc04384cdd8672\"" Apr 21 09:59:10.821870 containerd[1491]: time="2026-04-21T09:59:10.821775209Z" level=info msg="StartContainer for \"cd443326cd4377de60fce901f2ecc61fc2a1672374445d3a60dc04384cdd8672\"" Apr 21 09:59:10.891803 systemd[1]: Started cri-containerd-cd443326cd4377de60fce901f2ecc61fc2a1672374445d3a60dc04384cdd8672.scope - libcontainer container cd443326cd4377de60fce901f2ecc61fc2a1672374445d3a60dc04384cdd8672. Apr 21 09:59:10.934759 containerd[1491]: time="2026-04-21T09:59:10.934083799Z" level=info msg="StartContainer for \"cd443326cd4377de60fce901f2ecc61fc2a1672374445d3a60dc04384cdd8672\" returns successfully" Apr 21 09:59:11.285812 containerd[1491]: time="2026-04-21T09:59:11.285662517Z" level=info msg="StopPodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\"" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.368 [WARNING][5301] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"6fb1502d-d737-4fec-9a69-7870084d206a", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3", Pod:"calico-apiserver-5459f6b57d-nbrdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali008e5664097", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.370 [INFO][5301] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.370 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" iface="eth0" netns="" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.370 [INFO][5301] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.370 [INFO][5301] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.404 [INFO][5310] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.404 [INFO][5310] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.404 [INFO][5310] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.416 [WARNING][5310] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.416 [INFO][5310] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.418 [INFO][5310] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:11.422947 containerd[1491]: 2026-04-21 09:59:11.420 [INFO][5301] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.422947 containerd[1491]: time="2026-04-21T09:59:11.422817895Z" level=info msg="TearDown network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" successfully" Apr 21 09:59:11.422947 containerd[1491]: time="2026-04-21T09:59:11.422857575Z" level=info msg="StopPodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" returns successfully" Apr 21 09:59:11.424772 containerd[1491]: time="2026-04-21T09:59:11.424488782Z" level=info msg="RemovePodSandbox for \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\"" Apr 21 09:59:11.437426 containerd[1491]: time="2026-04-21T09:59:11.435114743Z" level=info msg="Forcibly stopping sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\"" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.495 [WARNING][5325] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"6fb1502d-d737-4fec-9a69-7870084d206a", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3", Pod:"calico-apiserver-5459f6b57d-nbrdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali008e5664097", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.495 [INFO][5325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.495 [INFO][5325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" iface="eth0" netns="" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.495 [INFO][5325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.495 [INFO][5325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.518 [INFO][5332] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.518 [INFO][5332] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.518 [INFO][5332] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.527 [WARNING][5332] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.528 [INFO][5332] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" HandleID="k8s-pod-network.fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--nbrdx-eth0" Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.530 [INFO][5332] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:11.533294 containerd[1491]: 2026-04-21 09:59:11.531 [INFO][5325] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263" Apr 21 09:59:11.533905 containerd[1491]: time="2026-04-21T09:59:11.533870371Z" level=info msg="TearDown network for sandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" successfully" Apr 21 09:59:11.546657 containerd[1491]: time="2026-04-21T09:59:11.546496180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:11.546896 containerd[1491]: time="2026-04-21T09:59:11.546870422Z" level=info msg="RemovePodSandbox \"fd7463de2df18482e3ff6fc17a4172a4747ed3a4af30d8a9f7cf4e09dd73c263\" returns successfully" Apr 21 09:59:11.547967 containerd[1491]: time="2026-04-21T09:59:11.547520944Z" level=info msg="StopPodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\"" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.603 [WARNING][5347] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.604 [INFO][5347] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.604 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" iface="eth0" netns="" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.604 [INFO][5347] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.604 [INFO][5347] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.652 [INFO][5355] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.654 [INFO][5355] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.654 [INFO][5355] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.663 [WARNING][5355] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.664 [INFO][5355] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.666 [INFO][5355] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:11.669577 containerd[1491]: 2026-04-21 09:59:11.668 [INFO][5347] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.670230 containerd[1491]: time="2026-04-21T09:59:11.670093665Z" level=info msg="TearDown network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" successfully" Apr 21 09:59:11.670230 containerd[1491]: time="2026-04-21T09:59:11.670123506Z" level=info msg="StopPodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" returns successfully" Apr 21 09:59:11.670910 containerd[1491]: time="2026-04-21T09:59:11.670763148Z" level=info msg="RemovePodSandbox for \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\"" Apr 21 09:59:11.670910 containerd[1491]: time="2026-04-21T09:59:11.670867748Z" level=info msg="Forcibly stopping sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\"" Apr 21 09:59:11.750509 kubelet[2531]: I0421 09:59:11.750111 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5459f6b57d-px5v8" podStartSLOduration=36.108885105 
podStartE2EDuration="41.750096419s" podCreationTimestamp="2026-04-21 09:58:30 +0000 UTC" firstStartedPulling="2026-04-21 09:59:05.151033492 +0000 UTC m=+54.001164814" lastFinishedPulling="2026-04-21 09:59:10.792244806 +0000 UTC m=+59.642376128" observedRunningTime="2026-04-21 09:59:11.748294492 +0000 UTC m=+60.598425814" watchObservedRunningTime="2026-04-21 09:59:11.750096419 +0000 UTC m=+60.600227741" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.712 [WARNING][5371] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" WorkloadEndpoint="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.713 [INFO][5371] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.713 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" iface="eth0" netns="" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.713 [INFO][5371] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.713 [INFO][5371] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.751 [INFO][5378] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.752 [INFO][5378] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.752 [INFO][5378] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.768 [WARNING][5378] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.768 [INFO][5378] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" HandleID="k8s-pod-network.887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Workload="ci--4081--3--7--a--ee081c135b-k8s-whisker--5c9bb4cfc--2r98m-eth0" Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.771 [INFO][5378] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:11.777593 containerd[1491]: 2026-04-21 09:59:11.774 [INFO][5371] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310" Apr 21 09:59:11.777986 containerd[1491]: time="2026-04-21T09:59:11.777633367Z" level=info msg="TearDown network for sandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" successfully" Apr 21 09:59:11.789661 containerd[1491]: time="2026-04-21T09:59:11.789615695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:11.790289 containerd[1491]: time="2026-04-21T09:59:11.789691295Z" level=info msg="RemovePodSandbox \"887aeaae58424028481ab3ba084c138daf74ee4c855c44d73eea8d09c01ab310\" returns successfully" Apr 21 09:59:11.790815 containerd[1491]: time="2026-04-21T09:59:11.790502298Z" level=info msg="StopPodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\"" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.835 [WARNING][5394] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0", GenerateName:"calico-kube-controllers-77f77bf9b9-", Namespace:"calico-system", SelfLink:"", UID:"e864a05e-cd95-4bff-bcff-cd119ea67d7b", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f77bf9b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1", Pod:"calico-kube-controllers-77f77bf9b9-58b2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56c02b99ff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.836 [INFO][5394] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.836 [INFO][5394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" iface="eth0" netns="" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.836 [INFO][5394] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.836 [INFO][5394] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.863 [INFO][5402] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.863 [INFO][5402] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.863 [INFO][5402] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.877 [WARNING][5402] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.877 [INFO][5402] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.879 [INFO][5402] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:11.884804 containerd[1491]: 2026-04-21 09:59:11.882 [INFO][5394] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.884804 containerd[1491]: time="2026-04-21T09:59:11.884669388Z" level=info msg="TearDown network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" successfully" Apr 21 09:59:11.884804 containerd[1491]: time="2026-04-21T09:59:11.884697588Z" level=info msg="StopPodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" returns successfully" Apr 21 09:59:11.886888 containerd[1491]: time="2026-04-21T09:59:11.886835396Z" level=info msg="RemovePodSandbox for \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\"" Apr 21 09:59:11.886888 containerd[1491]: time="2026-04-21T09:59:11.886875756Z" level=info msg="Forcibly stopping sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\"" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.928 [WARNING][5416] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0", GenerateName:"calico-kube-controllers-77f77bf9b9-", Namespace:"calico-system", SelfLink:"", UID:"e864a05e-cd95-4bff-bcff-cd119ea67d7b", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f77bf9b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1", Pod:"calico-kube-controllers-77f77bf9b9-58b2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56c02b99ff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.929 [INFO][5416] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.929 [INFO][5416] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" iface="eth0" netns="" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.929 [INFO][5416] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.929 [INFO][5416] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.952 [INFO][5423] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.953 [INFO][5423] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.953 [INFO][5423] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.964 [WARNING][5423] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.964 [INFO][5423] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" HandleID="k8s-pod-network.78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--kube--controllers--77f77bf9b9--58b2r-eth0" Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.966 [INFO][5423] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:11.970284 containerd[1491]: 2026-04-21 09:59:11.968 [INFO][5416] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902" Apr 21 09:59:11.970775 containerd[1491]: time="2026-04-21T09:59:11.970334044Z" level=info msg="TearDown network for sandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" successfully" Apr 21 09:59:11.975398 containerd[1491]: time="2026-04-21T09:59:11.974821101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:11.975398 containerd[1491]: time="2026-04-21T09:59:11.974946902Z" level=info msg="RemovePodSandbox \"78c6fe9b1009170525042e0813cc636bec5bac7260d91049d85756119b89a902\" returns successfully" Apr 21 09:59:11.976006 containerd[1491]: time="2026-04-21T09:59:11.975980786Z" level=info msg="StopPodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\"" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.027 [WARNING][5437] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88a4ed3c-b6a4-4138-826b-621c4a7e3007", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2", Pod:"csi-node-driver-h5t5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d4ccc8657d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.027 [INFO][5437] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.027 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" iface="eth0" netns="" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.027 [INFO][5437] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.027 [INFO][5437] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.051 [INFO][5444] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.052 [INFO][5444] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.052 [INFO][5444] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.062 [WARNING][5444] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.062 [INFO][5444] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.065 [INFO][5444] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.069283 containerd[1491]: 2026-04-21 09:59:12.067 [INFO][5437] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.070323 containerd[1491]: time="2026-04-21T09:59:12.069336575Z" level=info msg="TearDown network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" successfully" Apr 21 09:59:12.070323 containerd[1491]: time="2026-04-21T09:59:12.069363976Z" level=info msg="StopPodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" returns successfully" Apr 21 09:59:12.070539 containerd[1491]: time="2026-04-21T09:59:12.070481900Z" level=info msg="RemovePodSandbox for \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\"" Apr 21 09:59:12.070539 containerd[1491]: time="2026-04-21T09:59:12.070513020Z" level=info msg="Forcibly stopping sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\"" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.125 [WARNING][5459] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88a4ed3c-b6a4-4138-826b-621c4a7e3007", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2", Pod:"csi-node-driver-h5t5f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d4ccc8657d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.126 [INFO][5459] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.126 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" iface="eth0" netns="" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.126 [INFO][5459] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.126 [INFO][5459] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.155 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.155 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.155 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.168 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.168 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" HandleID="k8s-pod-network.a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Workload="ci--4081--3--7--a--ee081c135b-k8s-csi--node--driver--h5t5f-eth0" Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.170 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.174907 containerd[1491]: 2026-04-21 09:59:12.173 [INFO][5459] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf" Apr 21 09:59:12.177304 containerd[1491]: time="2026-04-21T09:59:12.175956568Z" level=info msg="TearDown network for sandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" successfully" Apr 21 09:59:12.182353 containerd[1491]: time="2026-04-21T09:59:12.182183031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:12.182353 containerd[1491]: time="2026-04-21T09:59:12.182295671Z" level=info msg="RemovePodSandbox \"a3a7ba5cd540ce025c2249402764097fa63245920d81677f4d27c30512df44cf\" returns successfully" Apr 21 09:59:12.183338 containerd[1491]: time="2026-04-21T09:59:12.183313795Z" level=info msg="StopPodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\"" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.248 [WARNING][5481] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"da192526-415e-407b-a486-c9ee15869745", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d", Pod:"coredns-7d764666f9-cp2pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6eca454ee2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.249 [INFO][5481] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.249 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" iface="eth0" netns="" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.249 [INFO][5481] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.249 [INFO][5481] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.289 [INFO][5489] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.291 [INFO][5489] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.291 [INFO][5489] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.307 [WARNING][5489] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.307 [INFO][5489] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.310 [INFO][5489] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.317445 containerd[1491]: 2026-04-21 09:59:12.314 [INFO][5481] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.317445 containerd[1491]: time="2026-04-21T09:59:12.317215047Z" level=info msg="TearDown network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" successfully" Apr 21 09:59:12.317445 containerd[1491]: time="2026-04-21T09:59:12.317251768Z" level=info msg="StopPodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" returns successfully" Apr 21 09:59:12.319702 containerd[1491]: time="2026-04-21T09:59:12.319439376Z" level=info msg="RemovePodSandbox for \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\"" Apr 21 09:59:12.319702 containerd[1491]: time="2026-04-21T09:59:12.319517456Z" level=info msg="Forcibly stopping sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\"" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.372 [WARNING][5503] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"da192526-415e-407b-a486-c9ee15869745", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"709963f9b8bf8ef0121a58cfde937b727d38a2cb82005748431005cc066f115d", Pod:"coredns-7d764666f9-cp2pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6eca454ee2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.373 [INFO][5503] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.373 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" iface="eth0" netns="" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.373 [INFO][5503] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.373 [INFO][5503] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.401 [INFO][5511] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.401 [INFO][5511] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.401 [INFO][5511] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.412 [WARNING][5511] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.412 [INFO][5511] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" HandleID="k8s-pod-network.63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--cp2pp-eth0" Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.415 [INFO][5511] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.420452 containerd[1491]: 2026-04-21 09:59:12.417 [INFO][5503] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74" Apr 21 09:59:12.420452 containerd[1491]: time="2026-04-21T09:59:12.419865945Z" level=info msg="TearDown network for sandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" successfully" Apr 21 09:59:12.424691 containerd[1491]: time="2026-04-21T09:59:12.424295281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:12.424691 containerd[1491]: time="2026-04-21T09:59:12.424443162Z" level=info msg="RemovePodSandbox \"63740b7a02edb9961117257acbaa7a46cbcefdfb6e1eacd0911f24ddd8d61a74\" returns successfully" Apr 21 09:59:12.426640 containerd[1491]: time="2026-04-21T09:59:12.426195608Z" level=info msg="StopPodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\"" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.487 [WARNING][5525] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"5cf12a5d-710e-475b-9da9-806fe2f83ca0", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7", Pod:"goldmane-9f7667bb8-gc82f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"califd93cb06382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.487 [INFO][5525] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.488 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" iface="eth0" netns="" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.488 [INFO][5525] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.488 [INFO][5525] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.519 [INFO][5532] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.520 [INFO][5532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.520 [INFO][5532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.539 [WARNING][5532] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.539 [INFO][5532] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.544 [INFO][5532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.554022 containerd[1491]: 2026-04-21 09:59:12.551 [INFO][5525] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.554022 containerd[1491]: time="2026-04-21T09:59:12.554009839Z" level=info msg="TearDown network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" successfully" Apr 21 09:59:12.554682 containerd[1491]: time="2026-04-21T09:59:12.554038959Z" level=info msg="StopPodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" returns successfully" Apr 21 09:59:12.555074 containerd[1491]: time="2026-04-21T09:59:12.554898042Z" level=info msg="RemovePodSandbox for \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\"" Apr 21 09:59:12.555336 containerd[1491]: time="2026-04-21T09:59:12.555122523Z" level=info msg="Forcibly stopping sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\"" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.613 [WARNING][5548] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"5cf12a5d-710e-475b-9da9-806fe2f83ca0", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"72b856df1a47f0c97778f60ab0602462aecb5f207a636a1e8ac620f186f279e7", Pod:"goldmane-9f7667bb8-gc82f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd93cb06382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.613 [INFO][5548] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.613 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" iface="eth0" netns="" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.613 [INFO][5548] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.613 [INFO][5548] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.644 [INFO][5556] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.645 [INFO][5556] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.645 [INFO][5556] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.660 [WARNING][5556] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.660 [INFO][5556] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" HandleID="k8s-pod-network.7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Workload="ci--4081--3--7--a--ee081c135b-k8s-goldmane--9f7667bb8--gc82f-eth0" Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.663 [INFO][5556] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.667330 containerd[1491]: 2026-04-21 09:59:12.664 [INFO][5548] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e" Apr 21 09:59:12.667797 containerd[1491]: time="2026-04-21T09:59:12.667513656Z" level=info msg="TearDown network for sandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" successfully" Apr 21 09:59:12.672348 containerd[1491]: time="2026-04-21T09:59:12.672288474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:12.672505 containerd[1491]: time="2026-04-21T09:59:12.672401274Z" level=info msg="RemovePodSandbox \"7a642ce6ae785243f93578f4a77bad2654666934617a08be3eaeadcae5fc273e\" returns successfully" Apr 21 09:59:12.673027 containerd[1491]: time="2026-04-21T09:59:12.672998597Z" level=info msg="StopPodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\"" Apr 21 09:59:12.747409 kubelet[2531]: I0421 09:59:12.746328 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.723 [WARNING][5571] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"f066e964-91d8-4473-bead-51ea9c76986c", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22", Pod:"calico-apiserver-5459f6b57d-px5v8", 
Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd063c0e382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.724 [INFO][5571] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.724 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" iface="eth0" netns="" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.724 [INFO][5571] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.724 [INFO][5571] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.771 [INFO][5579] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.776 [INFO][5579] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.777 [INFO][5579] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.795 [WARNING][5579] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.795 [INFO][5579] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.798 [INFO][5579] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.802817 containerd[1491]: 2026-04-21 09:59:12.800 [INFO][5571] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.804409 containerd[1491]: time="2026-04-21T09:59:12.803646277Z" level=info msg="TearDown network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" successfully" Apr 21 09:59:12.804547 containerd[1491]: time="2026-04-21T09:59:12.804520560Z" level=info msg="StopPodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" returns successfully" Apr 21 09:59:12.805323 containerd[1491]: time="2026-04-21T09:59:12.805290003Z" level=info msg="RemovePodSandbox for \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\"" Apr 21 09:59:12.805323 containerd[1491]: time="2026-04-21T09:59:12.805324923Z" level=info msg="Forcibly stopping sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\"" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.867 [WARNING][5594] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0", GenerateName:"calico-apiserver-5459f6b57d-", Namespace:"calico-system", SelfLink:"", UID:"f066e964-91d8-4473-bead-51ea9c76986c", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5459f6b57d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"16f0438c438cc94711c07ff042abf30d5f7efce14140ffc8b99cf81745cf2e22", Pod:"calico-apiserver-5459f6b57d-px5v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd063c0e382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.867 [INFO][5594] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.867 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" iface="eth0" netns="" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.867 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.867 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.929 [INFO][5605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.930 [INFO][5605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.930 [INFO][5605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.940 [WARNING][5605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.940 [INFO][5605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" HandleID="k8s-pod-network.90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Workload="ci--4081--3--7--a--ee081c135b-k8s-calico--apiserver--5459f6b57d--px5v8-eth0" Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.942 [INFO][5605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:12.947247 containerd[1491]: 2026-04-21 09:59:12.944 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb" Apr 21 09:59:12.947782 containerd[1491]: time="2026-04-21T09:59:12.947307766Z" level=info msg="TearDown network for sandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" successfully" Apr 21 09:59:12.952785 containerd[1491]: time="2026-04-21T09:59:12.952721546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:12.952941 containerd[1491]: time="2026-04-21T09:59:12.952807266Z" level=info msg="RemovePodSandbox \"90c553d90a3986f1be8f82ca8674e5caa61b64a0695a3ce4d512ae40b2506dbb\" returns successfully" Apr 21 09:59:12.953784 containerd[1491]: time="2026-04-21T09:59:12.953371508Z" level=info msg="StopPodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\"" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.000 [WARNING][5622] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f14c36e9-166a-4256-9de6-cdefe0504d6e", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc", Pod:"coredns-7d764666f9-gbnk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c5a34c259", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.002 [INFO][5622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.002 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" iface="eth0" netns="" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.002 [INFO][5622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.002 [INFO][5622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.027 [INFO][5629] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.027 [INFO][5629] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.027 [INFO][5629] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.039 [WARNING][5629] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.039 [INFO][5629] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.041 [INFO][5629] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:13.045902 containerd[1491]: 2026-04-21 09:59:13.044 [INFO][5622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.048682 containerd[1491]: time="2026-04-21T09:59:13.048533167Z" level=info msg="TearDown network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" successfully" Apr 21 09:59:13.048682 containerd[1491]: time="2026-04-21T09:59:13.048577247Z" level=info msg="StopPodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" returns successfully" Apr 21 09:59:13.049120 containerd[1491]: time="2026-04-21T09:59:13.049094329Z" level=info msg="RemovePodSandbox for \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\"" Apr 21 09:59:13.049181 containerd[1491]: time="2026-04-21T09:59:13.049125369Z" level=info msg="Forcibly stopping sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\"" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.098 [WARNING][5643] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f14c36e9-166a-4256-9de6-cdefe0504d6e", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 9, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-ee081c135b", ContainerID:"e0a140f2412d1560b230134029cb5586f50695c1e76a0c9b68bb53fbfcf051fc", Pod:"coredns-7d764666f9-gbnk5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c5a34c259", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.098 [INFO][5643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.098 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" iface="eth0" netns="" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.098 [INFO][5643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.098 [INFO][5643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.137 [INFO][5650] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.138 [INFO][5650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.139 [INFO][5650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.150 [WARNING][5650] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.151 [INFO][5650] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" HandleID="k8s-pod-network.684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Workload="ci--4081--3--7--a--ee081c135b-k8s-coredns--7d764666f9--gbnk5-eth0" Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.153 [INFO][5650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 09:59:13.160216 containerd[1491]: 2026-04-21 09:59:13.155 [INFO][5643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5" Apr 21 09:59:13.160711 containerd[1491]: time="2026-04-21T09:59:13.160281473Z" level=info msg="TearDown network for sandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" successfully" Apr 21 09:59:13.165207 containerd[1491]: time="2026-04-21T09:59:13.164970609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 09:59:13.165207 containerd[1491]: time="2026-04-21T09:59:13.165180290Z" level=info msg="RemovePodSandbox \"684bbab4d02cdd5ae355fd13d9bbf94d7231b695faa1c2783e61b8377b343ee5\" returns successfully" Apr 21 09:59:14.112066 containerd[1491]: time="2026-04-21T09:59:14.111991932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:14.114411 containerd[1491]: time="2026-04-21T09:59:14.113627417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 21 09:59:14.114930 containerd[1491]: time="2026-04-21T09:59:14.114817341Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:14.118356 containerd[1491]: time="2026-04-21T09:59:14.118275552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:14.119608 containerd[1491]: time="2026-04-21T09:59:14.119314035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 3.326556067s" Apr 21 09:59:14.119608 containerd[1491]: time="2026-04-21T09:59:14.119352515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 21 09:59:14.123369 containerd[1491]: 
time="2026-04-21T09:59:14.121571842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 09:59:14.126699 containerd[1491]: time="2026-04-21T09:59:14.126659139Z" level=info msg="CreateContainer within sandbox \"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 09:59:14.153262 containerd[1491]: time="2026-04-21T09:59:14.153205945Z" level=info msg="CreateContainer within sandbox \"35799bc7d409aa28bf7ecf1fddd0204763c166f2de901ae3860efad2987ec3d2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0120a6400937a7f22ed4a2dd140c4c6e8b5ecd703736d6cd7ca0f588ed42d529\"" Apr 21 09:59:14.154968 containerd[1491]: time="2026-04-21T09:59:14.154785270Z" level=info msg="StartContainer for \"0120a6400937a7f22ed4a2dd140c4c6e8b5ecd703736d6cd7ca0f588ed42d529\"" Apr 21 09:59:14.204719 systemd[1]: Started cri-containerd-0120a6400937a7f22ed4a2dd140c4c6e8b5ecd703736d6cd7ca0f588ed42d529.scope - libcontainer container 0120a6400937a7f22ed4a2dd140c4c6e8b5ecd703736d6cd7ca0f588ed42d529. 
Apr 21 09:59:14.239523 containerd[1491]: time="2026-04-21T09:59:14.239455024Z" level=info msg="StartContainer for \"0120a6400937a7f22ed4a2dd140c4c6e8b5ecd703736d6cd7ca0f588ed42d529\" returns successfully" Apr 21 09:59:14.433340 kubelet[2531]: I0421 09:59:14.432519 2531 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 09:59:14.433340 kubelet[2531]: I0421 09:59:14.432551 2531 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 09:59:14.525339 containerd[1491]: time="2026-04-21T09:59:14.525283068Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 09:59:14.526628 containerd[1491]: time="2026-04-21T09:59:14.526576112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 09:59:14.531490 containerd[1491]: time="2026-04-21T09:59:14.531410968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 409.762085ms" Apr 21 09:59:14.532193 containerd[1491]: time="2026-04-21T09:59:14.531466968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 21 09:59:14.534532 containerd[1491]: time="2026-04-21T09:59:14.534265857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 09:59:14.539438 containerd[1491]: 
time="2026-04-21T09:59:14.539345834Z" level=info msg="CreateContainer within sandbox \"5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 09:59:14.561543 containerd[1491]: time="2026-04-21T09:59:14.561493865Z" level=info msg="CreateContainer within sandbox \"5626109c46c4b0e09772da8fc17bb604cc4f6c3af025044f15c4737ace700cd3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f5a2676bed1d7a922d4e72e406a86f7876be906e7d9146579a37c8b6d2f9a5f\"" Apr 21 09:59:14.562881 containerd[1491]: time="2026-04-21T09:59:14.562695109Z" level=info msg="StartContainer for \"9f5a2676bed1d7a922d4e72e406a86f7876be906e7d9146579a37c8b6d2f9a5f\"" Apr 21 09:59:14.593601 systemd[1]: Started cri-containerd-9f5a2676bed1d7a922d4e72e406a86f7876be906e7d9146579a37c8b6d2f9a5f.scope - libcontainer container 9f5a2676bed1d7a922d4e72e406a86f7876be906e7d9146579a37c8b6d2f9a5f. Apr 21 09:59:14.630425 containerd[1491]: time="2026-04-21T09:59:14.630368008Z" level=info msg="StartContainer for \"9f5a2676bed1d7a922d4e72e406a86f7876be906e7d9146579a37c8b6d2f9a5f\" returns successfully" Apr 21 09:59:14.799580 kubelet[2531]: I0421 09:59:14.799499 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-h5t5f" podStartSLOduration=31.462697318 podStartE2EDuration="42.799479115s" podCreationTimestamp="2026-04-21 09:58:32 +0000 UTC" firstStartedPulling="2026-04-21 09:59:02.784627325 +0000 UTC m=+51.634758607" lastFinishedPulling="2026-04-21 09:59:14.121409002 +0000 UTC m=+62.971540404" observedRunningTime="2026-04-21 09:59:14.78881776 +0000 UTC m=+63.638949122" watchObservedRunningTime="2026-04-21 09:59:14.799479115 +0000 UTC m=+63.649610437" Apr 21 09:59:14.837959 kubelet[2531]: I0421 09:59:14.837774 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5459f6b57d-nbrdx" 
podStartSLOduration=36.346090449 podStartE2EDuration="44.837757159s" podCreationTimestamp="2026-04-21 09:58:30 +0000 UTC" firstStartedPulling="2026-04-21 09:59:06.041219783 +0000 UTC m=+54.891351105" lastFinishedPulling="2026-04-21 09:59:14.532886493 +0000 UTC m=+63.383017815" observedRunningTime="2026-04-21 09:59:14.837551198 +0000 UTC m=+63.687682520" watchObservedRunningTime="2026-04-21 09:59:14.837757159 +0000 UTC m=+63.687888521" Apr 21 09:59:15.637770 systemd[1]: Started sshd@9-178.104.214.66:22-50.85.169.122:46858.service - OpenSSH per-connection server daemon (50.85.169.122:46858). Apr 21 09:59:15.759556 sshd[5745]: Accepted publickey for core from 50.85.169.122 port 46858 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g Apr 21 09:59:15.762287 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 09:59:15.768567 systemd-logind[1464]: New session 10 of user core. Apr 21 09:59:15.774594 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 09:59:16.025663 sshd[5745]: pam_unix(sshd:session): session closed for user core Apr 21 09:59:16.033285 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Apr 21 09:59:16.034131 systemd[1]: sshd@9-178.104.214.66:22-50.85.169.122:46858.service: Deactivated successfully. Apr 21 09:59:16.038477 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 09:59:16.045538 systemd-logind[1464]: Removed session 10. 
Apr 21 09:59:17.078190 containerd[1491]: time="2026-04-21T09:59:17.078127086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:59:17.080418 containerd[1491]: time="2026-04-21T09:59:17.079567889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955"
Apr 21 09:59:17.091047 containerd[1491]: time="2026-04-21T09:59:17.090958880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.556641103s"
Apr 21 09:59:17.091275 containerd[1491]: time="2026-04-21T09:59:17.091250201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\""
Apr 21 09:59:17.091907 containerd[1491]: time="2026-04-21T09:59:17.091866882Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:59:17.095319 containerd[1491]: time="2026-04-21T09:59:17.095282171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 09:59:17.124215 containerd[1491]: time="2026-04-21T09:59:17.124166128Z" level=info msg="CreateContainer within sandbox \"f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 21 09:59:17.144758 containerd[1491]: time="2026-04-21T09:59:17.144696783Z" level=info msg="CreateContainer within sandbox \"f555c291fdf814b91d9d0e4b20e5833dfea54bc000091f2497c713faf1086ad1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3c9fec14b20ad824473f7d37d29c33f429caa24be2f2afb87934dbc35aaef696\""
Apr 21 09:59:17.146751 containerd[1491]: time="2026-04-21T09:59:17.146717628Z" level=info msg="StartContainer for \"3c9fec14b20ad824473f7d37d29c33f429caa24be2f2afb87934dbc35aaef696\""
Apr 21 09:59:17.189605 systemd[1]: Started cri-containerd-3c9fec14b20ad824473f7d37d29c33f429caa24be2f2afb87934dbc35aaef696.scope - libcontainer container 3c9fec14b20ad824473f7d37d29c33f429caa24be2f2afb87934dbc35aaef696.
Apr 21 09:59:17.248278 containerd[1491]: time="2026-04-21T09:59:17.248205459Z" level=info msg="StartContainer for \"3c9fec14b20ad824473f7d37d29c33f429caa24be2f2afb87934dbc35aaef696\" returns successfully"
Apr 21 09:59:17.876171 kubelet[2531]: I0421 09:59:17.876106 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77f77bf9b9-58b2r" podStartSLOduration=36.689885565 podStartE2EDuration="45.876088012s" podCreationTimestamp="2026-04-21 09:58:32 +0000 UTC" firstStartedPulling="2026-04-21 09:59:07.911670051 +0000 UTC m=+56.761801373" lastFinishedPulling="2026-04-21 09:59:17.097872498 +0000 UTC m=+65.948003820" observedRunningTime="2026-04-21 09:59:17.81172388 +0000 UTC m=+66.661855202" watchObservedRunningTime="2026-04-21 09:59:17.876088012 +0000 UTC m=+66.726219294"
Apr 21 09:59:21.057688 systemd[1]: Started sshd@10-178.104.214.66:22-50.85.169.122:39998.service - OpenSSH per-connection server daemon (50.85.169.122:39998).
Apr 21 09:59:21.188043 sshd[5865]: Accepted publickey for core from 50.85.169.122 port 39998 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:21.190734 sshd[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:21.198849 systemd-logind[1464]: New session 11 of user core.
Apr 21 09:59:21.204742 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 09:59:21.395468 sshd[5865]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:21.404308 systemd[1]: sshd@10-178.104.214.66:22-50.85.169.122:39998.service: Deactivated successfully.
Apr 21 09:59:21.410794 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 09:59:21.413096 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit.
Apr 21 09:59:21.432696 systemd[1]: Started sshd@11-178.104.214.66:22-50.85.169.122:40004.service - OpenSSH per-connection server daemon (50.85.169.122:40004).
Apr 21 09:59:21.435511 systemd-logind[1464]: Removed session 11.
Apr 21 09:59:21.570022 sshd[5879]: Accepted publickey for core from 50.85.169.122 port 40004 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:21.573639 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:21.582272 systemd-logind[1464]: New session 12 of user core.
Apr 21 09:59:21.585628 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 09:59:21.815622 sshd[5879]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:21.825633 systemd[1]: sshd@11-178.104.214.66:22-50.85.169.122:40004.service: Deactivated successfully.
Apr 21 09:59:21.830111 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 09:59:21.832718 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit.
Apr 21 09:59:21.858173 systemd[1]: Started sshd@12-178.104.214.66:22-50.85.169.122:40020.service - OpenSSH per-connection server daemon (50.85.169.122:40020).
Apr 21 09:59:21.863952 systemd-logind[1464]: Removed session 12.
Apr 21 09:59:21.995891 sshd[5891]: Accepted publickey for core from 50.85.169.122 port 40020 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:21.998564 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:22.006797 systemd-logind[1464]: New session 13 of user core.
Apr 21 09:59:22.009609 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 09:59:22.199186 sshd[5891]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:22.204433 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit.
Apr 21 09:59:22.204726 systemd[1]: sshd@12-178.104.214.66:22-50.85.169.122:40020.service: Deactivated successfully.
Apr 21 09:59:22.207977 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 09:59:22.210480 systemd-logind[1464]: Removed session 13.
Apr 21 09:59:23.733782 kubelet[2531]: I0421 09:59:23.733196 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 09:59:27.230828 systemd[1]: Started sshd@13-178.104.214.66:22-50.85.169.122:40032.service - OpenSSH per-connection server daemon (50.85.169.122:40032).
Apr 21 09:59:27.352039 sshd[5928]: Accepted publickey for core from 50.85.169.122 port 40032 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:27.356791 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:27.364719 systemd-logind[1464]: New session 14 of user core.
Apr 21 09:59:27.371577 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 09:59:27.557314 sshd[5928]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:27.563879 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit.
Apr 21 09:59:27.564061 systemd[1]: sshd@13-178.104.214.66:22-50.85.169.122:40032.service: Deactivated successfully.
Apr 21 09:59:27.567348 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 09:59:27.568964 systemd-logind[1464]: Removed session 14.
Apr 21 09:59:32.593151 systemd[1]: Started sshd@14-178.104.214.66:22-50.85.169.122:41612.service - OpenSSH per-connection server daemon (50.85.169.122:41612).
Apr 21 09:59:32.731355 sshd[5951]: Accepted publickey for core from 50.85.169.122 port 41612 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:32.733018 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:32.739444 systemd-logind[1464]: New session 15 of user core.
Apr 21 09:59:32.747648 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 09:59:32.931621 sshd[5951]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:32.936107 systemd[1]: sshd@14-178.104.214.66:22-50.85.169.122:41612.service: Deactivated successfully.
Apr 21 09:59:32.938870 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 09:59:32.940709 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit.
Apr 21 09:59:32.942646 systemd-logind[1464]: Removed session 15.
Apr 21 09:59:32.963410 systemd[1]: Started sshd@15-178.104.214.66:22-50.85.169.122:41614.service - OpenSSH per-connection server daemon (50.85.169.122:41614).
Apr 21 09:59:33.095459 sshd[5965]: Accepted publickey for core from 50.85.169.122 port 41614 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:33.097329 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:33.103312 systemd-logind[1464]: New session 16 of user core.
Apr 21 09:59:33.111738 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 09:59:33.456588 sshd[5965]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:33.462064 systemd[1]: sshd@15-178.104.214.66:22-50.85.169.122:41614.service: Deactivated successfully.
Apr 21 09:59:33.466284 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 09:59:33.468272 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit.
Apr 21 09:59:33.476092 systemd[1]: Started sshd@16-178.104.214.66:22-50.85.169.122:41628.service - OpenSSH per-connection server daemon (50.85.169.122:41628).
Apr 21 09:59:33.477827 systemd-logind[1464]: Removed session 16.
Apr 21 09:59:33.612831 sshd[5976]: Accepted publickey for core from 50.85.169.122 port 41628 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:33.614581 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:33.622551 systemd-logind[1464]: New session 17 of user core.
Apr 21 09:59:33.632704 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 09:59:34.468982 sshd[5976]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:34.477868 systemd[1]: sshd@16-178.104.214.66:22-50.85.169.122:41628.service: Deactivated successfully.
Apr 21 09:59:34.480374 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 09:59:34.483484 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit.
Apr 21 09:59:34.503887 systemd[1]: Started sshd@17-178.104.214.66:22-50.85.169.122:41630.service - OpenSSH per-connection server daemon (50.85.169.122:41630).
Apr 21 09:59:34.504859 systemd-logind[1464]: Removed session 17.
Apr 21 09:59:34.631442 sshd[5999]: Accepted publickey for core from 50.85.169.122 port 41630 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:34.633491 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:34.638472 systemd-logind[1464]: New session 18 of user core.
Apr 21 09:59:34.641574 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 09:59:34.951138 sshd[5999]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:34.958612 systemd[1]: sshd@17-178.104.214.66:22-50.85.169.122:41630.service: Deactivated successfully.
Apr 21 09:59:34.966441 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 09:59:34.969740 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit.
Apr 21 09:59:34.986894 systemd[1]: Started sshd@18-178.104.214.66:22-50.85.169.122:41640.service - OpenSSH per-connection server daemon (50.85.169.122:41640).
Apr 21 09:59:34.988357 systemd-logind[1464]: Removed session 18.
Apr 21 09:59:35.106833 sshd[6021]: Accepted publickey for core from 50.85.169.122 port 41640 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:35.108144 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:35.114100 systemd-logind[1464]: New session 19 of user core.
Apr 21 09:59:35.120832 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 09:59:35.300823 sshd[6021]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:35.308791 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 09:59:35.309720 systemd[1]: sshd@18-178.104.214.66:22-50.85.169.122:41640.service: Deactivated successfully.
Apr 21 09:59:35.313155 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit.
Apr 21 09:59:35.314959 systemd-logind[1464]: Removed session 19.
Apr 21 09:59:40.334176 systemd[1]: Started sshd@19-178.104.214.66:22-50.85.169.122:48036.service - OpenSSH per-connection server daemon (50.85.169.122:48036).
Apr 21 09:59:40.457244 sshd[6038]: Accepted publickey for core from 50.85.169.122 port 48036 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:40.458189 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:40.464420 systemd-logind[1464]: New session 20 of user core.
Apr 21 09:59:40.470611 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 09:59:40.646183 sshd[6038]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:40.653510 systemd[1]: sshd@19-178.104.214.66:22-50.85.169.122:48036.service: Deactivated successfully.
Apr 21 09:59:40.658330 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 09:59:40.659493 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit.
Apr 21 09:59:40.661235 systemd-logind[1464]: Removed session 20.
Apr 21 09:59:45.683864 systemd[1]: Started sshd@20-178.104.214.66:22-50.85.169.122:48050.service - OpenSSH per-connection server daemon (50.85.169.122:48050).
Apr 21 09:59:45.808480 sshd[6072]: Accepted publickey for core from 50.85.169.122 port 48050 ssh2: RSA SHA256:H2GDHYMb+1VDhh8fYRULGIeGI6zEpuvWNbrKKWv7l+g
Apr 21 09:59:45.810336 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 09:59:45.816112 systemd-logind[1464]: New session 21 of user core.
Apr 21 09:59:45.821921 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 09:59:45.998818 sshd[6072]: pam_unix(sshd:session): session closed for user core
Apr 21 09:59:46.005256 systemd[1]: sshd@20-178.104.214.66:22-50.85.169.122:48050.service: Deactivated successfully.
Apr 21 09:59:46.007329 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 09:59:46.008605 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit.
Apr 21 09:59:46.010212 systemd-logind[1464]: Removed session 21.