Apr 21 10:11:52.966423 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:11:52.966441 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:11:52.966451 kernel: BIOS-provided physical RAM map:
Apr 21 10:11:52.966455 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:11:52.966460 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 21 10:11:52.966464 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 21 10:11:52.966469 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 21 10:11:52.966474 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 21 10:11:52.966478 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 21 10:11:52.966483 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 21 10:11:52.966487 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 21 10:11:52.966494 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 21 10:11:52.966498 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 21 10:11:52.966503 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 21 10:11:52.966509 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 10:11:52.966513 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:11:52.966520 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 10:11:52.966525 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 21 10:11:52.966530 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:11:52.966534 kernel: NX (Execute Disable) protection: active
Apr 21 10:11:52.966539 kernel: APIC: Static calls initialized
Apr 21 10:11:52.966544 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 10:11:52.966549 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e015018
Apr 21 10:11:52.966553 kernel: efi: Remove mem136: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 21 10:11:52.966558 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 21 10:11:52.966563 kernel: SMBIOS 3.0.0 present.
Apr 21 10:11:52.966568 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 21 10:11:52.966572 kernel: Hypervisor detected: KVM
Apr 21 10:11:52.966579 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:11:52.966584 kernel: kvm-clock: using sched offset of 12534546040 cycles
Apr 21 10:11:52.966589 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:11:52.966594 kernel: tsc: Detected 2399.998 MHz processor
Apr 21 10:11:52.966599 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:11:52.966604 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:11:52.966609 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 21 10:11:52.966614 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:11:52.966618 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:11:52.966626 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 21 10:11:52.966630 kernel: Using GB pages for direct mapping
Apr 21 10:11:52.966635 kernel: Secure boot disabled
Apr 21 10:11:52.966643 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:11:52.966648 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 21 10:11:52.966653 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:11:52.966658 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:52.966665 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:52.966670 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 21 10:11:52.966675 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:52.966681 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:52.966685 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:52.966691 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:52.966696 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:11:52.966703 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 21 10:11:52.966708 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 21 10:11:52.966713 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 21 10:11:52.966718 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 21 10:11:52.966723 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 21 10:11:52.966728 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 21 10:11:52.966733 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 21 10:11:52.966738 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 21 10:11:52.966743 kernel: No NUMA configuration found
Apr 21 10:11:52.966750 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 21 10:11:52.966755 kernel: NODE_DATA(0) allocated [mem 0x179ff8000-0x179ffdfff]
Apr 21 10:11:52.966760 kernel: Zone ranges:
Apr 21 10:11:52.966766 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:11:52.966770 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:11:52.966775 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:11:52.966780 kernel: Movable zone start for each node
Apr 21 10:11:52.966785 kernel: Early memory node ranges
Apr 21 10:11:52.966791 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:11:52.966795 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 21 10:11:52.966803 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 21 10:11:52.966808 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 21 10:11:52.966813 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:11:52.966818 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 21 10:11:52.966823 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:11:52.966828 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:11:52.966833 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 21 10:11:52.966838 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 21 10:11:52.966843 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 21 10:11:52.966850 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 10:11:52.966855 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:11:52.966860 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:11:52.966865 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:11:52.966870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:11:52.966875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:11:52.966880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:11:52.966885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:11:52.966890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:11:52.966897 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:11:52.966928 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:11:52.966933 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:11:52.966938 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:11:52.966943 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 21 10:11:52.966948 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:11:52.966954 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:11:52.966959 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:11:52.966964 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:11:52.966972 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:11:52.966977 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:11:52.966982 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 21 10:11:52.966987 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:11:52.966993 kernel: random: crng init done
Apr 21 10:11:52.966998 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:11:52.967003 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:11:52.967008 kernel: Fallback order for Node 0: 0
Apr 21 10:11:52.967013 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 21 10:11:52.967020 kernel: Policy zone: Normal
Apr 21 10:11:52.967025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:11:52.967030 kernel: software IO TLB: area num 2.
Apr 21 10:11:52.967035 kernel: Memory: 3819168K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 271796K reserved, 0K cma-reserved)
Apr 21 10:11:52.967041 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:11:52.967046 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:11:52.967051 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:11:52.967056 kernel: Dynamic Preempt: voluntary
Apr 21 10:11:52.967061 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:11:52.967069 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:11:52.967074 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:11:52.967079 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:11:52.967091 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:11:52.967099 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:11:52.967104 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:11:52.967109 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:11:52.967115 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:11:52.967120 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:11:52.967125 kernel: Console: colour dummy device 80x25
Apr 21 10:11:52.967130 kernel: printk: console [tty0] enabled
Apr 21 10:11:52.967138 kernel: printk: console [ttyS0] enabled
Apr 21 10:11:52.967143 kernel: ACPI: Core revision 20230628
Apr 21 10:11:52.967149 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:11:52.967154 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:11:52.967159 kernel: x2apic enabled
Apr 21 10:11:52.967164 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:11:52.967172 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:11:52.967177 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:11:52.967183 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Apr 21 10:11:52.967188 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:11:52.967193 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:11:52.967199 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:11:52.967207 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:11:52.967212 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 21 10:11:52.967220 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:11:52.967225 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:11:52.967230 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:11:52.967236 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 21 10:11:52.967241 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:11:52.967246 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:11:52.967252 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:11:52.967257 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:11:52.967262 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:11:52.967270 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:11:52.967275 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:11:52.967280 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:11:52.967285 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:11:52.967291 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:11:52.967296 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:11:52.967312 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:11:52.967318 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:11:52.967324 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 21 10:11:52.967331 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 21 10:11:52.967337 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:11:52.967342 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:11:52.967347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:11:52.967353 kernel: landlock: Up and running.
Apr 21 10:11:52.967358 kernel: SELinux: Initializing.
Apr 21 10:11:52.967363 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:11:52.967369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:11:52.967374 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 21 10:11:52.967382 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:11:52.967387 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:11:52.967393 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:11:52.967398 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:11:52.967403 kernel: ... version: 0
Apr 21 10:11:52.967408 kernel: ... bit width: 48
Apr 21 10:11:52.967414 kernel: ... generic registers: 6
Apr 21 10:11:52.967419 kernel: ... value mask: 0000ffffffffffff
Apr 21 10:11:52.967424 kernel: ... max period: 00007fffffffffff
Apr 21 10:11:52.967432 kernel: ... fixed-purpose events: 0
Apr 21 10:11:52.967437 kernel: ... event mask: 000000000000003f
Apr 21 10:11:52.967442 kernel: signal: max sigframe size: 3376
Apr 21 10:11:52.967447 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:11:52.967453 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:11:52.967458 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:11:52.967463 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:11:52.967469 kernel: .... node #0, CPUs: #1
Apr 21 10:11:52.967474 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:11:52.967481 kernel: smpboot: Max logical packages: 1
Apr 21 10:11:52.967487 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Apr 21 10:11:52.967492 kernel: devtmpfs: initialized
Apr 21 10:11:52.967497 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:11:52.967503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 21 10:11:52.967508 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:11:52.967514 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:11:52.967519 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:11:52.967524 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:11:52.967532 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:11:52.967537 kernel: audit: type=2000 audit(1776766312.218:1): state=initialized audit_enabled=0 res=1
Apr 21 10:11:52.967542 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:11:52.967547 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:11:52.967553 kernel: cpuidle: using governor menu
Apr 21 10:11:52.967558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:11:52.967563 kernel: dca service started, version 1.12.1
Apr 21 10:11:52.967569 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 21 10:11:52.967574 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:11:52.967582 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:11:52.967587 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:11:52.967592 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:11:52.967598 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:11:52.967603 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:11:52.967608 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:11:52.967613 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:11:52.967618 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:11:52.967624 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:11:52.967631 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:11:52.967637 kernel: ACPI: Interpreter enabled
Apr 21 10:11:52.967642 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:11:52.967647 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:11:52.967652 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:11:52.967658 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:11:52.967663 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:11:52.967668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:11:52.967825 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:11:52.968069 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:11:52.968172 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:11:52.968179 kernel: PCI host bridge to bus 0000:00
Apr 21 10:11:52.968281 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:11:52.968384 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:11:52.968475 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:11:52.968570 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 21 10:11:52.968661 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 21 10:11:52.968750 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 21 10:11:52.968841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:11:52.968975 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:11:52.969083 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 21 10:11:52.969183 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 21 10:11:52.969286 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 21 10:11:52.969394 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 21 10:11:52.969493 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:11:52.969592 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:11:52.969691 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:11:52.969796 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.969897 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 21 10:11:52.970013 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.970111 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 21 10:11:52.970215 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.970322 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 21 10:11:52.970428 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.970529 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 21 10:11:52.970631 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.970728 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 21 10:11:52.970834 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.970944 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 21 10:11:52.971048 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.971145 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 21 10:11:52.971252 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.971361 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 21 10:11:52.971465 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:52.971563 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 21 10:11:52.971666 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:11:52.971762 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:11:52.971867 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:11:52.971984 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 21 10:11:52.972082 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 21 10:11:52.972185 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:11:52.972282 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 21 10:11:52.972402 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:11:52.972510 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 21 10:11:52.972611 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 21 10:11:52.972712 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:11:52.972820 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:11:52.972937 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:11:52.973036 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:11:52.973143 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 10:11:52.973247 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 21 10:11:52.973356 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:11:52.973455 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:11:52.973567 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 10:11:52.973670 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 21 10:11:52.973770 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 21 10:11:52.973873 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:11:52.974061 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:11:52.974157 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:11:52.974262 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 10:11:52.974372 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 21 10:11:52.974468 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:11:52.974562 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:11:52.974666 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 10:11:52.974770 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 21 10:11:52.974870 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 21 10:11:52.974988 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:11:52.975088 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:11:52.975187 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:11:52.975293 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 10:11:52.975404 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 21 10:11:52.975508 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 21 10:11:52.975603 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:11:52.975697 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:11:52.975791 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:11:52.975797 kernel: acpiphp: Slot [0] registered
Apr 21 10:11:52.975916 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:11:52.976017 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 21 10:11:52.976117 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 21 10:11:52.976219 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:11:52.976324 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:11:52.976419 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:11:52.976513 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:11:52.976520 kernel: acpiphp: Slot [0-2] registered
Apr 21 10:11:52.976614 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:11:52.976711 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:11:52.976804 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:11:52.976814 kernel: acpiphp: Slot [0-3] registered
Apr 21 10:11:52.976918 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:11:52.977014 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:11:52.977108 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:11:52.977115 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:11:52.977120 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:11:52.977126 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:11:52.977132 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:11:52.977140 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:11:52.977145 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:11:52.977150 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:11:52.977156 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:11:52.977161 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:11:52.977167 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:11:52.977172 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:11:52.977177 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:11:52.977182 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:11:52.977190 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:11:52.977195 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:11:52.977201 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:11:52.977206 kernel: iommu: Default domain type: Translated
Apr 21 10:11:52.977212 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:11:52.977217 kernel: efivars: Registered efivars operations
Apr 21 10:11:52.977222 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:11:52.977228 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:11:52.977233 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 21 10:11:52.977241 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 21 10:11:52.977246 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 21 10:11:52.977251 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 21 10:11:52.977357 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:11:52.977452 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:11:52.977546 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:11:52.977553 kernel: vgaarb: loaded
Apr 21 10:11:52.977558 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:11:52.977563 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:11:52.977573 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:11:52.977579 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:11:52.977585 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:11:52.977590 kernel: pnp: PnP ACPI init
Apr 21 10:11:52.977694 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 21 10:11:52.977701 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:11:52.977706 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:11:52.977712 kernel: NET: Registered PF_INET protocol family
Apr 21 10:11:52.977732 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:11:52.977740 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:11:52.977746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:11:52.977752 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:11:52.977757 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:11:52.977763 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:11:52.977768 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:11:52.977774 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:11:52.977780 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:11:52.977788 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:11:52.977891 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:11:52.978018 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:11:52.978116 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 10:11:52.978212 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 10:11:52.978321 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 10:11:52.978417 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 10:11:52.979042 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 10:11:52.979154 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 10:11:52.979257 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 21 10:11:52.979369 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:11:52.979469 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:11:52.979564 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:11:52.979660 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:11:52.979757 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:11:52.979853 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:11:52.980008 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:11:52.980104 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:11:52.980199 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:11:52.980294 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:11:52.980403 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:11:52.980498 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:11:52.980593 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:11:52.980688 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:11:52.980784 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:11:52.980881 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:11:52.981020 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 21 10:11:52.981117 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:11:52.981219 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 21 10:11:52.981325 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:11:52.981420 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:11:52.981514 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:11:52.981610 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 21 10:11:52.981708 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:11:52.981802 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:11:52.981897 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:11:52.982036 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 21 10:11:52.982136 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:11:52.982232 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:11:52.982334 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 
21 10:11:52.982422 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:11:52.982516 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:11:52.982604 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 21 10:11:52.982690 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 21 10:11:52.982777 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 21 10:11:52.982877 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 21 10:11:52.983008 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 21 10:11:52.983110 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 21 10:11:52.983213 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 21 10:11:52.983316 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 21 10:11:52.983416 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 21 10:11:52.983517 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 21 10:11:52.983610 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 21 10:11:52.983708 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 21 10:11:52.983803 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 21 10:11:52.983913 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 21 10:11:52.984007 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 21 10:11:52.984100 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 21 10:11:52.984199 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 21 10:11:52.984293 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 21 10:11:52.984395 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 21 10:11:52.984500 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 21 10:11:52.984592 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 21 10:11:52.984683 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 21 10:11:52.984690 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:11:52.984698 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:11:52.984704 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 10:11:52.984709 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 21 10:11:52.984715 kernel: Initialise system trusted keyrings Apr 21 10:11:52.984723 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:11:52.984729 kernel: Key type asymmetric registered Apr 21 10:11:52.984734 kernel: Asymmetric key parser 'x509' registered Apr 21 10:11:52.984740 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:11:52.984746 kernel: io scheduler mq-deadline registered Apr 21 10:11:52.984751 kernel: io scheduler kyber registered Apr 21 10:11:52.984757 kernel: io scheduler bfq registered Apr 21 10:11:52.984854 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 21 10:11:52.984963 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 21 10:11:52.985063 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 21 10:11:52.985158 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 21 10:11:52.985253 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 21 10:11:52.985358 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 21 10:11:52.985455 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 21 10:11:52.985550 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 21 10:11:52.985645 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 21 10:11:52.985739 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 21 10:11:52.985836 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 21 
10:11:52.985951 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 21 10:11:52.986046 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 21 10:11:52.986141 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 21 10:11:52.986236 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 21 10:11:52.986341 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 21 10:11:52.986347 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:11:52.986460 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 21 10:11:52.986565 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 21 10:11:52.986572 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 10:11:52.986578 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 21 10:11:52.986584 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:11:52.986590 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:11:52.986595 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:11:52.986603 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:11:52.986609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:11:52.986711 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 21 10:11:52.986721 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 21 10:11:52.986812 kernel: rtc_cmos 00:03: registered as rtc0 Apr 21 10:11:52.986919 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:11:52 UTC (1776766312) Apr 21 10:11:52.987011 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 21 10:11:52.987017 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 21 10:11:52.987023 kernel: efifb: probing for efifb Apr 21 10:11:52.987028 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 21 10:11:52.987034 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 21 
10:11:52.987043 kernel: efifb: scrolling: redraw Apr 21 10:11:52.987048 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 21 10:11:52.987054 kernel: Console: switching to colour frame buffer device 160x50 Apr 21 10:11:52.987059 kernel: fb0: EFI VGA frame buffer device Apr 21 10:11:52.987065 kernel: pstore: Using crash dump compression: deflate Apr 21 10:11:52.987070 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 10:11:52.987076 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:11:52.987081 kernel: Segment Routing with IPv6 Apr 21 10:11:52.987087 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:11:52.987095 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:11:52.987100 kernel: Key type dns_resolver registered Apr 21 10:11:52.987106 kernel: IPI shorthand broadcast: enabled Apr 21 10:11:52.987111 kernel: sched_clock: Marking stable (1418011596, 215910882)->(1692748275, -58825797) Apr 21 10:11:52.987117 kernel: registered taskstats version 1 Apr 21 10:11:52.987123 kernel: Loading compiled-in X.509 certificates Apr 21 10:11:52.987128 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:11:52.987134 kernel: Key type .fscrypt registered Apr 21 10:11:52.987139 kernel: Key type fscrypt-provisioning registered Apr 21 10:11:52.987147 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 21 10:11:52.987152 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:11:52.987158 kernel: ima: No architecture policies found Apr 21 10:11:52.987163 kernel: clk: Disabling unused clocks Apr 21 10:11:52.987169 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:11:52.987174 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:11:52.987180 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:11:52.987185 kernel: Run /init as init process Apr 21 10:11:52.987191 kernel: with arguments: Apr 21 10:11:52.987199 kernel: /init Apr 21 10:11:52.987204 kernel: with environment: Apr 21 10:11:52.987209 kernel: HOME=/ Apr 21 10:11:52.987215 kernel: TERM=linux Apr 21 10:11:52.987222 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:11:52.987229 systemd[1]: Detected virtualization kvm. Apr 21 10:11:52.987235 systemd[1]: Detected architecture x86-64. Apr 21 10:11:52.987243 systemd[1]: Running in initrd. Apr 21 10:11:52.987249 systemd[1]: No hostname configured, using default hostname. Apr 21 10:11:52.987255 systemd[1]: Hostname set to . Apr 21 10:11:52.987261 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:11:52.987266 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:11:52.987272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:11:52.987278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:11:52.987285 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 21 10:11:52.987293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:11:52.987307 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:11:52.987313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:11:52.987319 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:11:52.987326 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:11:52.987331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:11:52.987339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:11:52.987347 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:11:52.987353 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:11:52.987359 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:11:52.987365 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:11:52.987370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:11:52.987376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:11:52.987382 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:11:52.987388 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:11:52.987396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:11:52.987402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:11:52.987408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:11:52.987414 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:11:52.987420 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:11:52.987426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:11:52.987431 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:11:52.987437 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:11:52.987443 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:11:52.987469 systemd-journald[188]: Collecting audit messages is disabled. Apr 21 10:11:52.987483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:11:52.987489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:52.987496 systemd-journald[188]: Journal started Apr 21 10:11:52.987511 systemd-journald[188]: Runtime Journal (/run/log/journal/27c9a5df54734c42b89331062ed22f68) is 8.0M, max 76.3M, 68.3M free. Apr 21 10:11:52.993941 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:11:52.994463 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:11:52.995588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:11:52.996404 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:11:53.005125 systemd-modules-load[189]: Inserted module 'overlay' Apr 21 10:11:53.009110 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:11:53.012020 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:11:53.012603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:53.022055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:11:53.025003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 21 10:11:53.031930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:11:53.034139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:11:53.035980 kernel: Bridge firewalling registered Apr 21 10:11:53.034415 systemd-modules-load[189]: Inserted module 'br_netfilter' Apr 21 10:11:53.037051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:11:53.037931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:11:53.048107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:11:53.049764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:53.055032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:11:53.057550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:11:53.059091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:11:53.064036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:11:53.067285 dracut-cmdline[220]: dracut-dracut-053 Apr 21 10:11:53.070944 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:11:53.097029 systemd-resolved[226]: Positive Trust Anchors: Apr 21 10:11:53.097044 systemd-resolved[226]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:11:53.097067 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:11:53.101338 systemd-resolved[226]: Defaulting to hostname 'linux'. Apr 21 10:11:53.102367 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:11:53.103806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:11:53.143931 kernel: SCSI subsystem initialized Apr 21 10:11:53.152009 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:11:53.160939 kernel: iscsi: registered transport (tcp) Apr 21 10:11:53.178430 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:11:53.178485 kernel: QLogic iSCSI HBA Driver Apr 21 10:11:53.220658 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:11:53.227060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:11:53.248021 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:11:53.248075 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:11:53.251692 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:11:53.289955 kernel: raid6: avx512x4 gen() 48421 MB/s Apr 21 10:11:53.308039 kernel: raid6: avx512x2 gen() 51150 MB/s Apr 21 10:11:53.325982 kernel: raid6: avx512x1 gen() 33543 MB/s Apr 21 10:11:53.343945 kernel: raid6: avx2x4 gen() 52810 MB/s Apr 21 10:11:53.361934 kernel: raid6: avx2x2 gen() 55678 MB/s Apr 21 10:11:53.380962 kernel: raid6: avx2x1 gen() 43772 MB/s Apr 21 10:11:53.381047 kernel: raid6: using algorithm avx2x2 gen() 55678 MB/s Apr 21 10:11:53.400994 kernel: raid6: .... xor() 37313 MB/s, rmw enabled Apr 21 10:11:53.401044 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:11:53.417983 kernel: xor: automatically using best checksumming function avx Apr 21 10:11:53.527956 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:11:53.545285 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:11:53.552184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:11:53.562439 systemd-udevd[407]: Using default interface naming scheme 'v255'. Apr 21 10:11:53.566497 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:11:53.577496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:11:53.588620 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Apr 21 10:11:53.620659 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:11:53.626096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:11:53.698341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:11:53.703587 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:11:53.716740 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:11:53.719818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:11:53.720955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:11:53.721273 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:11:53.726033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:11:53.745414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:11:53.782941 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:11:53.799928 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:11:53.800006 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:11:53.801065 kernel: AES CTR mode by8 optimization enabled Apr 21 10:11:53.818936 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:11:53.822952 kernel: libata version 3.00 loaded. Apr 21 10:11:53.831638 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:11:53.831754 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:53.836704 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:11:53.837078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:11:53.837249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:53.837641 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:53.846877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:53.857179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:11:53.857301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 10:11:53.863943 kernel: ACPI: bus type USB registered Apr 21 10:11:53.867223 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:11:53.867447 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:11:53.870259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:53.874427 kernel: usbcore: registered new interface driver usbfs Apr 21 10:11:53.880734 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:11:53.882160 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:11:53.882355 kernel: usbcore: registered new interface driver hub Apr 21 10:11:53.888933 kernel: scsi host1: ahci Apr 21 10:11:53.892582 kernel: usbcore: registered new device driver usb Apr 21 10:11:53.892623 kernel: scsi host2: ahci Apr 21 10:11:53.894439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:53.898460 kernel: scsi host3: ahci Apr 21 10:11:53.907020 kernel: scsi host4: ahci Apr 21 10:11:53.907289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 21 10:11:53.911925 kernel: scsi host5: ahci Apr 21 10:11:53.916677 kernel: scsi host6: ahci Apr 21 10:11:53.929987 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Apr 21 10:11:53.930053 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Apr 21 10:11:53.930067 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Apr 21 10:11:53.943986 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Apr 21 10:11:53.944008 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Apr 21 10:11:53.944016 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:11:53.944203 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Apr 21 10:11:53.944211 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 21 10:11:53.944337 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:11:53.944481 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 21 10:11:53.944597 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 21 10:11:53.944724 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:11:53.944836 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 21 10:11:53.944967 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 21 10:11:53.950082 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:11:53.952981 kernel: hub 1-0:1.0: USB hub found Apr 21 10:11:53.953328 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:11:53.953505 kernel: hub 1-0:1.0: 4 ports detected Apr 21 10:11:53.956832 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:11:53.957119 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 21 10:11:53.962236 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Apr 21 10:11:53.962266 kernel: GPT:17805311 != 160006143 Apr 21 10:11:53.963879 kernel: hub 2-0:1.0: USB hub found Apr 21 10:11:53.964073 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:11:53.964443 kernel: hub 2-0:1.0: 4 ports detected Apr 21 10:11:53.964569 kernel: GPT:17805311 != 160006143 Apr 21 10:11:53.970133 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:11:53.973270 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:53.976620 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:11:53.977429 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:54.198022 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 21 10:11:54.255096 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:11:54.255194 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:11:54.255207 kernel: ata1.00: applying bridge limits Apr 21 10:11:54.256951 kernel: ata1.00: configured for UDMA/100 Apr 21 10:11:54.259070 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:54.259948 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:11:54.262308 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:54.265794 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:54.268009 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:54.269939 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:54.311945 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:11:54.312234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:11:54.316933 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (456) Apr 21 10:11:54.323930 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (467) Apr 21 10:11:54.330673 
systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:11:54.333945 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:11:54.337570 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 10:11:54.344068 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 10:11:54.344630 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 10:11:54.357821 kernel: usbcore: registered new interface driver usbhid Apr 21 10:11:54.357846 kernel: usbhid: USB HID core driver Apr 21 10:11:54.357857 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 21 10:11:54.357878 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 21 10:11:54.346449 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 10:11:54.362253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:11:54.368105 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:11:54.379010 disk-uuid[591]: Primary Header is updated. Apr 21 10:11:54.379010 disk-uuid[591]: Secondary Entries is updated. Apr 21 10:11:54.379010 disk-uuid[591]: Secondary Header is updated. Apr 21 10:11:54.384938 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:54.391955 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:54.397917 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:55.404014 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:55.406256 disk-uuid[592]: The operation has completed successfully. Apr 21 10:11:55.474736 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 21 10:11:55.474839 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:11:55.496073 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:11:55.499198 sh[612]: Success Apr 21 10:11:55.511928 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:11:55.554220 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:11:55.566036 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:11:55.567550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:11:55.588751 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:11:55.588827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:11:55.588841 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:11:55.593240 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:11:55.593299 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:11:55.602958 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:11:55.604660 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:11:55.607218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:11:55.613127 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:11:55.617021 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 21 10:11:55.634638 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:55.634676 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:11:55.634685 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:11:55.643440 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:11:55.643478 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:11:55.653992 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:11:55.657969 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:55.664840 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:11:55.674053 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:11:55.734043 ignition[725]: Ignition 2.19.0 Apr 21 10:11:55.734050 ignition[725]: Stage: fetch-offline Apr 21 10:11:55.734086 ignition[725]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:55.734094 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:55.734169 ignition[725]: parsed url from cmdline: "" Apr 21 10:11:55.734173 ignition[725]: no config URL provided Apr 21 10:11:55.734177 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:11:55.736058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:11:55.734185 ignition[725]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:11:55.737448 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:11:55.734189 ignition[725]: failed to fetch config: resource requires networking Apr 21 10:11:55.734376 ignition[725]: Ignition finished successfully Apr 21 10:11:55.744047 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 10:11:55.761058 systemd-networkd[798]: lo: Link UP
Apr 21 10:11:55.761067 systemd-networkd[798]: lo: Gained carrier
Apr 21 10:11:55.763491 systemd-networkd[798]: Enumeration completed
Apr 21 10:11:55.763561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:11:55.764366 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:11:55.764370 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:11:55.765275 systemd[1]: Reached target network.target - Network.
Apr 21 10:11:55.765403 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:11:55.765408 systemd-networkd[798]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:11:55.766107 systemd-networkd[798]: eth0: Link UP
Apr 21 10:11:55.766112 systemd-networkd[798]: eth0: Gained carrier
Apr 21 10:11:55.766120 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:11:55.770169 systemd-networkd[798]: eth1: Link UP
Apr 21 10:11:55.770173 systemd-networkd[798]: eth1: Gained carrier
Apr 21 10:11:55.770181 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:11:55.772058 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:11:55.782406 ignition[801]: Ignition 2.19.0
Apr 21 10:11:55.782416 ignition[801]: Stage: fetch
Apr 21 10:11:55.782525 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:11:55.782533 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:11:55.782591 ignition[801]: parsed url from cmdline: ""
Apr 21 10:11:55.782594 ignition[801]: no config URL provided
Apr 21 10:11:55.782599 ignition[801]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:11:55.782606 ignition[801]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:11:55.782619 ignition[801]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 21 10:11:55.782742 ignition[801]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:11:55.807965 systemd-networkd[798]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 10:11:55.828945 systemd-networkd[798]: eth0: DHCPv4 address 46.62.167.148/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 10:11:55.982978 ignition[801]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 21 10:11:55.990722 ignition[801]: GET result: OK
Apr 21 10:11:55.990837 ignition[801]: parsing config with SHA512: 4b1fd01c16fcc933335414d89534fd6c30a1ae7832ce378e5fe36dbfd138b2633a1f6e2432462f5eefae1b28a25f01c7b8ff52fa866433270552e16440af6587
Apr 21 10:11:55.996668 unknown[801]: fetched base config from "system"
Apr 21 10:11:55.997154 ignition[801]: fetch: fetch complete
Apr 21 10:11:55.996689 unknown[801]: fetched base config from "system"
Apr 21 10:11:55.997165 ignition[801]: fetch: fetch passed
Apr 21 10:11:55.996701 unknown[801]: fetched user config from "hetzner"
Apr 21 10:11:55.997237 ignition[801]: Ignition finished successfully
Apr 21 10:11:56.001860 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:11:56.015279 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:11:56.041891 ignition[808]: Ignition 2.19.0
Apr 21 10:11:56.041975 ignition[808]: Stage: kargs
Apr 21 10:11:56.042263 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:11:56.042285 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:11:56.043794 ignition[808]: kargs: kargs passed
Apr 21 10:11:56.046421 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:11:56.043877 ignition[808]: Ignition finished successfully
Apr 21 10:11:56.055149 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:11:56.093854 ignition[814]: Ignition 2.19.0
Apr 21 10:11:56.093877 ignition[814]: Stage: disks
Apr 21 10:11:56.094206 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:11:56.094229 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:11:56.095362 ignition[814]: disks: disks passed
Apr 21 10:11:56.097764 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:11:56.095436 ignition[814]: Ignition finished successfully
Apr 21 10:11:56.099939 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:11:56.101312 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:11:56.102655 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:11:56.104069 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:11:56.105398 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:11:56.112152 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:11:56.142895 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 21 10:11:56.146568 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:11:56.157126 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:11:56.246233 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:11:56.248121 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:11:56.250083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:11:56.261054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:11:56.263740 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:11:56.266044 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 10:11:56.266988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:11:56.267138 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:11:56.277936 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (830)
Apr 21 10:11:56.281452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:11:56.292866 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:11:56.292888 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:11:56.292900 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:11:56.298916 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:11:56.298941 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:11:56.301827 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:11:56.306699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:11:56.336305 coreos-metadata[832]: Apr 21 10:11:56.336 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 21 10:11:56.337511 coreos-metadata[832]: Apr 21 10:11:56.337 INFO Fetch successful
Apr 21 10:11:56.338825 coreos-metadata[832]: Apr 21 10:11:56.338 INFO wrote hostname ci-4081-3-7-5-d97ac59edd to /sysroot/etc/hostname
Apr 21 10:11:56.340730 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:11:56.341620 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:11:56.346146 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:11:56.349680 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:11:56.353796 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:11:56.427815 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:11:56.435980 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:11:56.438757 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:11:56.446927 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:11:56.465235 ignition[950]: INFO : Ignition 2.19.0
Apr 21 10:11:56.466571 ignition[950]: INFO : Stage: mount
Apr 21 10:11:56.467068 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:11:56.467068 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:11:56.469405 ignition[950]: INFO : mount: mount passed
Apr 21 10:11:56.469405 ignition[950]: INFO : Ignition finished successfully
Apr 21 10:11:56.469475 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:11:56.476975 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:11:56.477942 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:11:56.587033 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:11:56.593288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:11:56.621960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (963)
Apr 21 10:11:56.630248 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:11:56.630313 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:11:56.639135 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:11:56.651389 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:11:56.651438 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:11:56.656633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:11:56.692408 ignition[979]: INFO : Ignition 2.19.0
Apr 21 10:11:56.692408 ignition[979]: INFO : Stage: files
Apr 21 10:11:56.694803 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:11:56.694803 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:11:56.694803 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:11:56.697169 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:11:56.697169 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:11:56.700832 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:11:56.701804 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:11:56.701804 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:11:56.701543 unknown[979]: wrote ssh authorized keys file for user: core
Apr 21 10:11:56.704817 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:11:56.704817 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:11:57.013112 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:11:57.016454 systemd-networkd[798]: eth1: Gained IPv6LL
Apr 21 10:11:57.400458 systemd-networkd[798]: eth0: Gained IPv6LL
Apr 21 10:11:57.447280 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:11:57.448977 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:11:57.448977 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:11:57.448977 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:11:57.448977 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:11:57.448977 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:11:57.453420 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 21 10:11:57.647773 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:11:57.983093 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:11:57.983093 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:11:57.984443 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:11:57.984443 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:11:57.984443 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:11:57.984443 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:11:57.984443 ignition[979]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:11:57.988920 ignition[979]: INFO : files: files passed
Apr 21 10:11:57.988920 ignition[979]: INFO : Ignition finished successfully
Apr 21 10:11:57.986854 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:11:57.993025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:11:57.994827 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:11:57.998520 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:11:57.998662 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:11:58.006582 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:11:58.007613 initrd-setup-root-after-ignition[1008]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:11:58.008127 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:11:58.009055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:11:58.009947 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:11:58.014056 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:11:58.037820 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:11:58.037936 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:11:58.038886 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:11:58.039617 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:11:58.040431 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:11:58.046057 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:11:58.056161 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:11:58.062010 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:11:58.068682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:11:58.069142 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:11:58.069561 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:11:58.069977 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:11:58.070055 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:11:58.071221 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:11:58.071997 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:11:58.072639 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:11:58.074430 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:11:58.074831 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:11:58.075900 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:11:58.076686 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:11:58.077116 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:11:58.077840 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:11:58.078550 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:11:58.079309 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:11:58.079401 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:11:58.080513 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:11:58.081248 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:11:58.081933 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:11:58.082017 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:11:58.082587 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:11:58.082658 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:11:58.083677 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:11:58.083758 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:11:58.084383 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:11:58.084451 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:11:58.085042 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 10:11:58.085110 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:11:58.091395 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:11:58.092659 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:11:58.094977 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:11:58.095146 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:11:58.096036 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:11:58.096159 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:11:58.100148 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:11:58.100249 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:11:58.106275 ignition[1032]: INFO : Ignition 2.19.0
Apr 21 10:11:58.107634 ignition[1032]: INFO : Stage: umount
Apr 21 10:11:58.107634 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:11:58.107634 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:11:58.107634 ignition[1032]: INFO : umount: umount passed
Apr 21 10:11:58.107634 ignition[1032]: INFO : Ignition finished successfully
Apr 21 10:11:58.109870 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:11:58.110008 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:11:58.113445 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:11:58.113523 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:11:58.113891 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:11:58.113958 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:11:58.114999 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:11:58.115045 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:11:58.115427 systemd[1]: Stopped target network.target - Network.
Apr 21 10:11:58.115771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:11:58.115812 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:11:58.116167 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:11:58.116499 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:11:58.123891 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:11:58.124418 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:11:58.125179 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:11:58.126554 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:11:58.126662 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:11:58.130132 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:11:58.130195 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:11:58.132069 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:11:58.132137 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:11:58.133162 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:11:58.133213 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:11:58.133996 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:11:58.134491 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:11:58.137670 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:11:58.138236 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:11:58.138338 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:11:58.138945 systemd-networkd[798]: eth1: DHCPv6 lease lost
Apr 21 10:11:58.141049 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:11:58.141179 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:11:58.142245 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:11:58.142314 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:11:58.142960 systemd-networkd[798]: eth0: DHCPv6 lease lost
Apr 21 10:11:58.143882 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:11:58.143950 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:11:58.145085 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:11:58.145199 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:11:58.146549 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:11:58.146609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:11:58.153018 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:11:58.153395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:11:58.153459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:11:58.153834 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:11:58.153870 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:11:58.154282 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:11:58.154322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:11:58.155971 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:11:58.168046 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:11:58.168152 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:11:58.171582 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:11:58.171751 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:11:58.172655 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:11:58.172694 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:11:58.173104 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:11:58.173137 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:11:58.173808 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:11:58.173848 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:11:58.174952 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:11:58.174994 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:11:58.176339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:11:58.176388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:11:58.182134 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:11:58.182481 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:11:58.182529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:11:58.182885 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:11:58.182937 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:11:58.183300 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:11:58.183337 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:11:58.183691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:11:58.183724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:11:58.188497 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:11:58.188593 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:11:58.189132 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:11:58.194019 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:11:58.200214 systemd[1]: Switching root.
Apr 21 10:11:58.244854 systemd-journald[188]: Journal stopped
Apr 21 10:11:59.261068 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:11:59.261146 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:11:59.261169 kernel: SELinux: policy capability open_perms=1
Apr 21 10:11:59.261196 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:11:59.261211 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:11:59.261234 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:11:59.261244 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:11:59.261252 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:11:59.261261 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:11:59.261269 kernel: audit: type=1403 audit(1776766318.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:11:59.261284 systemd[1]: Successfully loaded SELinux policy in 45.654ms.
Apr 21 10:11:59.261305 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.628ms.
Apr 21 10:11:59.261320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:11:59.261330 systemd[1]: Detected virtualization kvm.
Apr 21 10:11:59.261339 systemd[1]: Detected architecture x86-64.
Apr 21 10:11:59.261348 systemd[1]: Detected first boot.
Apr 21 10:11:59.261366 systemd[1]: Hostname set to .
Apr 21 10:11:59.261379 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:11:59.261390 zram_generator::config[1076]: No configuration found.
Apr 21 10:11:59.261404 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:11:59.261420 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:11:59.261429 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:11:59.261438 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:11:59.261448 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:11:59.261457 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:11:59.261466 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:11:59.261479 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:11:59.261492 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:11:59.261504 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:11:59.261513 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:11:59.261526 systemd[1]: Created slice user.slice - User and Session Slice. Apr 21 10:11:59.261534 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:11:59.261544 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:11:59.261552 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 21 10:11:59.261565 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 10:11:59.261578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 10:11:59.261590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:11:59.261599 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 10:11:59.261608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:11:59.261617 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 21 10:11:59.261626 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 10:11:59.261637 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 10:11:59.261646 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 10:11:59.261658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:11:59.261671 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:11:59.261682 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:11:59.261691 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:11:59.261699 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 10:11:59.261708 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Apr 21 10:11:59.261717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:11:59.261728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:11:59.261737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:11:59.261750 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 10:11:59.261763 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 10:11:59.261771 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 10:11:59.261781 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 10:11:59.261790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:59.261798 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 10:11:59.261809 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 10:11:59.261819 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 21 10:11:59.261830 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 10:11:59.261844 systemd[1]: Reached target machines.target - Containers. Apr 21 10:11:59.261855 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 10:11:59.261866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:11:59.261875 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:11:59.261884 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 10:11:59.261892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 21 10:11:59.261922 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:11:59.261936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:11:59.261945 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 21 10:11:59.261954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:11:59.261963 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 21 10:11:59.261971 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 21 10:11:59.261983 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 21 10:11:59.261994 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 21 10:11:59.262004 systemd[1]: Stopped systemd-fsck-usr.service. Apr 21 10:11:59.262018 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:11:59.262030 kernel: ACPI: bus type drm_connector registered Apr 21 10:11:59.262039 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:11:59.262047 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 10:11:59.262057 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 21 10:11:59.262066 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:11:59.262075 systemd[1]: verity-setup.service: Deactivated successfully. Apr 21 10:11:59.262085 systemd[1]: Stopped verity-setup.service. Apr 21 10:11:59.262099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 21 10:11:59.262111 kernel: fuse: init (API version 7.39) Apr 21 10:11:59.262120 kernel: loop: module loaded Apr 21 10:11:59.262128 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 21 10:11:59.262137 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 21 10:11:59.262170 systemd-journald[1156]: Collecting audit messages is disabled. Apr 21 10:11:59.262202 systemd[1]: Mounted media.mount - External Media Directory. Apr 21 10:11:59.262214 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 21 10:11:59.262223 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 21 10:11:59.262232 systemd-journald[1156]: Journal started Apr 21 10:11:59.262249 systemd-journald[1156]: Runtime Journal (/run/log/journal/27c9a5df54734c42b89331062ed22f68) is 8.0M, max 76.3M, 68.3M free. Apr 21 10:11:58.939226 systemd[1]: Queued start job for default target multi-user.target. Apr 21 10:11:58.958667 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 21 10:11:58.959262 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 21 10:11:59.265969 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:11:59.266312 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 21 10:11:59.267143 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 21 10:11:59.267758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:11:59.268502 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 21 10:11:59.268655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 21 10:11:59.269516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:11:59.269704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:11:59.270449 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 21 10:11:59.270591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:11:59.271471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:11:59.271602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:11:59.272342 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 21 10:11:59.272481 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 21 10:11:59.273400 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:11:59.273571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:11:59.274302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:11:59.274950 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 10:11:59.275555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 21 10:11:59.288042 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 10:11:59.294070 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 21 10:11:59.297976 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 21 10:11:59.298447 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 21 10:11:59.298983 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:11:59.300840 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 21 10:11:59.307964 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 10:11:59.311013 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 21 10:11:59.311712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:11:59.317223 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 21 10:11:59.323074 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 21 10:11:59.323698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:11:59.330053 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 21 10:11:59.330591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:11:59.333629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:11:59.337112 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 10:11:59.340420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:11:59.343827 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 21 10:11:59.348824 systemd-journald[1156]: Time spent on flushing to /var/log/journal/27c9a5df54734c42b89331062ed22f68 is 66.923ms for 1179 entries. Apr 21 10:11:59.348824 systemd-journald[1156]: System Journal (/var/log/journal/27c9a5df54734c42b89331062ed22f68) is 8.0M, max 584.8M, 576.8M free. Apr 21 10:11:59.442279 systemd-journald[1156]: Received client request to flush runtime journal. Apr 21 10:11:59.442312 kernel: loop0: detected capacity change from 0 to 219192 Apr 21 10:11:59.350334 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 21 10:11:59.352026 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Apr 21 10:11:59.375444 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 21 10:11:59.375927 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 21 10:11:59.385390 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 21 10:11:59.416194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:11:59.419593 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 10:11:59.421835 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 21 10:11:59.453859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 21 10:11:59.459139 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 10:11:59.460728 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:11:59.471237 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 21 10:11:59.476378 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Apr 21 10:11:59.476855 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Apr 21 10:11:59.487802 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:11:59.493149 kernel: loop1: detected capacity change from 0 to 142488 Apr 21 10:11:59.498184 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 21 10:11:59.503724 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 21 10:11:59.533101 kernel: loop2: detected capacity change from 0 to 8 Apr 21 10:11:59.551614 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Apr 21 10:11:59.560266 kernel: loop3: detected capacity change from 0 to 140768 Apr 21 10:11:59.559631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:11:59.596418 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Apr 21 10:11:59.596723 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Apr 21 10:11:59.605066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:11:59.607947 kernel: loop4: detected capacity change from 0 to 219192 Apr 21 10:11:59.628123 kernel: loop5: detected capacity change from 0 to 142488 Apr 21 10:11:59.643018 kernel: loop6: detected capacity change from 0 to 8 Apr 21 10:11:59.647942 kernel: loop7: detected capacity change from 0 to 140768 Apr 21 10:11:59.663921 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 21 10:11:59.664778 (sd-merge)[1224]: Merged extensions into '/usr'. Apr 21 10:11:59.672461 systemd[1]: Reloading requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:11:59.672841 systemd[1]: Reloading... Apr 21 10:11:59.756938 zram_generator::config[1250]: No configuration found. Apr 21 10:11:59.820718 ldconfig[1191]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:11:59.874400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:11:59.911628 systemd[1]: Reloading finished in 236 ms. Apr 21 10:11:59.938108 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 21 10:11:59.938831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 10:11:59.939443 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Apr 21 10:11:59.948073 systemd[1]: Starting ensure-sysext.service... Apr 21 10:11:59.951020 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:11:59.953927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:11:59.956376 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)... Apr 21 10:11:59.956387 systemd[1]: Reloading... Apr 21 10:11:59.980379 systemd-udevd[1296]: Using default interface naming scheme 'v255'. Apr 21 10:11:59.980527 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 10:11:59.980803 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:11:59.981603 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 10:11:59.981846 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Apr 21 10:11:59.982249 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Apr 21 10:11:59.986253 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:11:59.986361 systemd-tmpfiles[1295]: Skipping /boot Apr 21 10:12:00.000601 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:12:00.001898 systemd-tmpfiles[1295]: Skipping /boot Apr 21 10:12:00.028927 zram_generator::config[1322]: No configuration found. Apr 21 10:12:00.164819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 21 10:12:00.216931 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1325) Apr 21 10:12:00.217006 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 10:12:00.261748 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 21 10:12:00.266231 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:12:00.279147 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 21 10:12:00.279168 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:12:00.270273 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 10:12:00.270533 systemd[1]: Reloading finished in 313 ms. Apr 21 10:12:00.292135 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:12:00.292375 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:12:00.300151 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:12:00.301714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:12:00.323059 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 21 10:12:00.334713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:00.336668 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:12:00.340146 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:12:00.343087 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 10:12:00.344092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 21 10:12:00.347111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:12:00.349149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:12:00.357107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:12:00.357681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:12:00.359517 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:12:00.364109 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:12:00.369660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:12:00.373551 kernel: EDAC MC: Ver: 3.0.0 Apr 21 10:12:00.379525 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:12:00.380534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:00.383062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:12:00.383234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:12:00.386700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:12:00.387239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 21 10:12:00.399936 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 21 10:12:00.404935 kernel: Console: switching to colour dummy device 80x25 Apr 21 10:12:00.404985 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 21 10:12:00.411005 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 21 10:12:00.411059 kernel: [drm] features: -context_init Apr 21 10:12:00.411071 kernel: [drm] number of scanouts: 1 Apr 21 10:12:00.412435 kernel: [drm] number of cap sets: 0 Apr 21 10:12:00.414786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:12:00.421027 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 21 10:12:00.421087 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 21 10:12:00.421099 kernel: Console: switching to colour frame buffer device 160x50 Apr 21 10:12:00.420750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:00.422050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:12:00.429056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:12:00.432116 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 21 10:12:00.440693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:12:00.443598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:12:00.444536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:12:00.453068 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 21 10:12:00.454508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:12:00.457017 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:12:00.457561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:12:00.459456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:12:00.459615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:12:00.465838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:12:00.466308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:12:00.472344 systemd[1]: Finished ensure-sysext.service. Apr 21 10:12:00.476468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:12:00.476668 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:12:00.481443 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 10:12:00.486120 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 10:12:00.491838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:12:00.494951 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:12:00.495482 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:12:00.495617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:12:00.503480 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:12:00.523431 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 21 10:12:00.531128 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 10:12:00.531943 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 10:12:00.540742 augenrules[1455]: No rules Apr 21 10:12:00.538488 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:12:00.551867 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:12:00.554216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:12:00.554413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:12:00.565089 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:12:00.569068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:12:00.572094 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:12:00.572751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:12:00.586654 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:12:00.600282 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:12:00.623525 systemd-networkd[1416]: lo: Link UP Apr 21 10:12:00.623779 systemd-networkd[1416]: lo: Gained carrier Apr 21 10:12:00.626575 systemd-networkd[1416]: Enumeration completed Apr 21 10:12:00.626724 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:12:00.629889 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:12:00.629895 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 21 10:12:00.630661 systemd-networkd[1416]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:12:00.630691 systemd-networkd[1416]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:12:00.631235 systemd-networkd[1416]: eth0: Link UP Apr 21 10:12:00.631309 systemd-networkd[1416]: eth0: Gained carrier Apr 21 10:12:00.631349 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:12:00.637173 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 10:12:00.639200 systemd-networkd[1416]: eth1: Link UP Apr 21 10:12:00.639204 systemd-networkd[1416]: eth1: Gained carrier Apr 21 10:12:00.639223 systemd-networkd[1416]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:12:00.654564 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:12:00.656393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:12:00.665203 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:12:00.673964 systemd-networkd[1416]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 21 10:12:00.674634 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:12:00.694240 systemd-networkd[1416]: eth0: DHCPv4 address 46.62.167.148/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 21 10:12:00.698769 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 10:12:00.700272 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 10:12:00.702072 systemd-resolved[1417]: Positive Trust Anchors: Apr 21 10:12:00.702297 systemd-resolved[1417]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:12:00.702362 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:12:00.706157 systemd-resolved[1417]: Using system hostname 'ci-4081-3-7-5-d97ac59edd'. Apr 21 10:12:00.708018 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:12:00.708528 systemd[1]: Reached target network.target - Network. Apr 21 10:12:00.708897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:12:00.710645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:12:00.711590 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 21 10:12:00.713355 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:12:00.715842 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 10:12:00.717206 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:12:00.717744 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:12:00.719150 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:12:00.719765 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 21 10:12:00.720129 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:12:00.720154 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:12:00.720477 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:12:00.722284 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:12:00.724258 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:12:00.731568 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:12:00.733219 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:12:00.733653 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:12:00.735245 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:12:00.735647 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:12:00.735676 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:12:00.736868 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:12:00.741052 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 21 10:12:00.747047 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:12:00.749265 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 10:12:00.751229 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:12:00.751580 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:12:00.753048 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:12:00.758993 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 21 10:12:00.761248 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 21 10:12:00.763335 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:12:00.765995 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:12:00.776142 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:12:00.777888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:12:00.779045 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:12:00.782033 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:12:00.789594 jq[1489]: false
Apr 21 10:12:00.799612 dbus-daemon[1486]: [system] SELinux support is enabled
Apr 21 10:12:00.804632 coreos-metadata[1485]: Apr 21 10:12:00.791 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 21 10:12:00.804632 coreos-metadata[1485]: Apr 21 10:12:00.794 INFO Fetch successful
Apr 21 10:12:00.804632 coreos-metadata[1485]: Apr 21 10:12:00.794 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 21 10:12:00.804632 coreos-metadata[1485]: Apr 21 10:12:00.794 INFO Fetch successful
Apr 21 10:12:00.790003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:12:00.792240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:12:00.792420 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:12:00.804300 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:12:00.816858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:12:00.816895 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:12:01.961157 systemd-resolved[1417]: Clock change detected. Flushing caches.
Apr 21 10:12:01.961318 systemd-timesyncd[1439]: Contacted time server 85.121.54.197:123 (0.flatcar.pool.ntp.org).
Apr 21 10:12:01.961549 systemd-timesyncd[1439]: Initial clock synchronization to Tue 2026-04-21 10:12:01.961117 UTC.
Apr 21 10:12:01.962672 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:12:01.962695 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:12:01.989134 jq[1498]: true
Apr 21 10:12:01.996738 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:12:01.996989 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:12:02.003848 extend-filesystems[1490]: Found loop4
Apr 21 10:12:02.004142 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:12:02.012904 update_engine[1497]: I20260421 10:12:02.009931 1497 main.cc:92] Flatcar Update Engine starting
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found loop5
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found loop6
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found loop7
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda1
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda2
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda3
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found usr
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda4
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda6
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda7
Apr 21 10:12:02.014257 extend-filesystems[1490]: Found sda9
Apr 21 10:12:02.014257 extend-filesystems[1490]: Checking size of /dev/sda9
Apr 21 10:12:02.044691 jq[1515]: true
Apr 21 10:12:02.044802 update_engine[1497]: I20260421 10:12:02.020031 1497 update_check_scheduler.cc:74] Next update check in 9m1s
Apr 21 10:12:02.047205 tar[1512]: linux-amd64/LICENSE
Apr 21 10:12:02.047205 tar[1512]: linux-amd64/helm
Apr 21 10:12:02.024503 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:12:02.042997 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:12:02.051516 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:12:02.051857 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:12:02.053493 systemd-logind[1496]: New seat seat0.
Apr 21 10:12:02.057310 systemd-logind[1496]: Watching system buttons on /dev/input/event3 (Power Button)
Apr 21 10:12:02.060420 extend-filesystems[1490]: Resized partition /dev/sda9
Apr 21 10:12:02.057334 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:12:02.061201 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:12:02.069390 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:12:02.080901 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 21 10:12:02.139451 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:12:02.143536 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:12:02.170308 bash[1557]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:12:02.172643 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:12:02.185882 systemd[1]: Starting sshkeys.service...
Apr 21 10:12:02.201382 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1337)
Apr 21 10:12:02.254388 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:12:02.265574 containerd[1505]: time="2026-04-21T10:12:02.263722392Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:12:02.266149 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:12:02.294516 containerd[1505]: time="2026-04-21T10:12:02.293157904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.295411 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:12:02.296745 containerd[1505]: time="2026-04-21T10:12:02.296717095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:12:02.296745 containerd[1505]: time="2026-04-21T10:12:02.296741815Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:12:02.296895 containerd[1505]: time="2026-04-21T10:12:02.296753525Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:12:02.297225 containerd[1505]: time="2026-04-21T10:12:02.297008775Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:12:02.297225 containerd[1505]: time="2026-04-21T10:12:02.297024875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297225 containerd[1505]: time="2026-04-21T10:12:02.297076256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297225 containerd[1505]: time="2026-04-21T10:12:02.297085066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297766 containerd[1505]: time="2026-04-21T10:12:02.297540446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297766 containerd[1505]: time="2026-04-21T10:12:02.297555716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297766 containerd[1505]: time="2026-04-21T10:12:02.297565606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297766 containerd[1505]: time="2026-04-21T10:12:02.297573396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.297766 containerd[1505]: time="2026-04-21T10:12:02.297645316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.298676 containerd[1505]: time="2026-04-21T10:12:02.297863276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:12:02.298676 containerd[1505]: time="2026-04-21T10:12:02.298568786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:12:02.298676 containerd[1505]: time="2026-04-21T10:12:02.298580706Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:12:02.298676 containerd[1505]: time="2026-04-21T10:12:02.298658396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:12:02.299237 containerd[1505]: time="2026-04-21T10:12:02.298694386Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:12:02.314042 coreos-metadata[1566]: Apr 21 10:12:02.313 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 21 10:12:02.316541 coreos-metadata[1566]: Apr 21 10:12:02.315 INFO Fetch successful
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315476583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315534403Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315547873Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315561873Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315572543Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315700553Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315862313Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315946463Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315962043Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315971483Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315983403Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.315992803Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.316002563Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.316676 containerd[1505]: time="2026-04-21T10:12:02.316012253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316022853Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316032533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316046033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316055503Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316077583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316087663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316096203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316105423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316114523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316124123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316132413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316141633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316150783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323005 containerd[1505]: time="2026-04-21T10:12:02.316160703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.318904 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316189463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316198523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316215923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316226873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316240953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316250004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316257664Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316310194Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316324134Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316331714Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316339644Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316346154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316354314Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:12:02.323357 containerd[1505]: time="2026-04-21T10:12:02.316365584Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:12:02.322038 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:12:02.323768 containerd[1505]: time="2026-04-21T10:12:02.316372644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.316603104Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.316648254Z" level=info msg="Connect containerd service"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.316677254Z" level=info msg="using legacy CRI server"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.316683304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.316751834Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317216194Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317462874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317503124Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317538724Z" level=info msg="Start subscribing containerd event"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317572164Z" level=info msg="Start recovering state"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317614204Z" level=info msg="Start event monitor"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317620974Z" level=info msg="Start snapshots syncer"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317627594Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317633314Z" level=info msg="Start streaming server"
Apr 21 10:12:02.323786 containerd[1505]: time="2026-04-21T10:12:02.317681664Z" level=info msg="containerd successfully booted in 0.055134s"
Apr 21 10:12:02.324121 unknown[1566]: wrote ssh authorized keys file for user: core
Apr 21 10:12:02.336430 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:12:02.348107 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:12:02.357739 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:12:02.357979 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:12:02.364102 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:12:02.376133 update-ssh-keys[1580]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:12:02.376407 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:12:02.380884 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:12:02.383854 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 21 10:12:02.384788 systemd[1]: Finished sshkeys.service.
Apr 21 10:12:02.395218 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:12:02.399189 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:12:02.399766 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:12:02.411176 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 21 10:12:02.411176 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 21 10:12:02.411176 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 21 10:12:02.417871 extend-filesystems[1490]: Resized filesystem in /dev/sda9
Apr 21 10:12:02.417871 extend-filesystems[1490]: Found sr0
Apr 21 10:12:02.414649 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:12:02.414897 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:12:02.646446 tar[1512]: linux-amd64/README.md
Apr 21 10:12:02.661637 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:12:03.343149 systemd-networkd[1416]: eth0: Gained IPv6LL
Apr 21 10:12:03.348721 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:12:03.351619 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:12:03.364033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:12:03.368412 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:12:03.391240 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:12:03.791029 systemd-networkd[1416]: eth1: Gained IPv6LL
Apr 21 10:12:04.043575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:12:04.044325 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:12:04.047094 systemd[1]: Startup finished in 1.569s (kernel) + 5.639s (initrd) + 4.566s (userspace) = 11.775s.
Apr 21 10:12:04.049108 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:12:04.430397 kubelet[1614]: E0421 10:12:04.430334 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:12:04.433554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:12:04.433738 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:12:07.901259 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:12:07.907672 systemd[1]: Started sshd@0-46.62.167.148:22-50.85.169.122:51906.service - OpenSSH per-connection server daemon (50.85.169.122:51906).
Apr 21 10:12:08.143004 sshd[1626]: Accepted publickey for core from 50.85.169.122 port 51906 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:08.146734 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:08.160633 systemd-logind[1496]: New session 1 of user core.
Apr 21 10:12:08.163224 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:12:08.172324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:12:08.185873 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:12:08.192231 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:12:08.196487 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:12:08.286043 systemd[1630]: Queued start job for default target default.target.
Apr 21 10:12:08.297187 systemd[1630]: Created slice app.slice - User Application Slice.
Apr 21 10:12:08.297216 systemd[1630]: Reached target paths.target - Paths.
Apr 21 10:12:08.297230 systemd[1630]: Reached target timers.target - Timers.
Apr 21 10:12:08.299151 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:12:08.321676 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:12:08.321806 systemd[1630]: Reached target sockets.target - Sockets.
Apr 21 10:12:08.321835 systemd[1630]: Reached target basic.target - Basic System.
Apr 21 10:12:08.321878 systemd[1630]: Reached target default.target - Main User Target.
Apr 21 10:12:08.321910 systemd[1630]: Startup finished in 119ms.
Apr 21 10:12:08.322045 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:12:08.329945 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:12:08.520239 systemd[1]: Started sshd@1-46.62.167.148:22-50.85.169.122:51914.service - OpenSSH per-connection server daemon (50.85.169.122:51914).
Apr 21 10:12:08.729538 sshd[1641]: Accepted publickey for core from 50.85.169.122 port 51914 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:08.730994 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:08.736869 systemd-logind[1496]: New session 2 of user core.
Apr 21 10:12:08.760176 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:12:08.912244 sshd[1641]: pam_unix(sshd:session): session closed for user core
Apr 21 10:12:08.919670 systemd[1]: sshd@1-46.62.167.148:22-50.85.169.122:51914.service: Deactivated successfully.
Apr 21 10:12:08.923460 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:12:08.924619 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:12:08.926489 systemd-logind[1496]: Removed session 2.
Apr 21 10:12:08.954165 systemd[1]: Started sshd@2-46.62.167.148:22-50.85.169.122:51920.service - OpenSSH per-connection server daemon (50.85.169.122:51920).
Apr 21 10:12:09.187964 sshd[1648]: Accepted publickey for core from 50.85.169.122 port 51920 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:09.190661 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:09.199243 systemd-logind[1496]: New session 3 of user core.
Apr 21 10:12:09.213125 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:12:09.355971 sshd[1648]: pam_unix(sshd:session): session closed for user core
Apr 21 10:12:09.362921 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:12:09.364260 systemd[1]: sshd@2-46.62.167.148:22-50.85.169.122:51920.service: Deactivated successfully.
Apr 21 10:12:09.367958 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:12:09.369782 systemd-logind[1496]: Removed session 3.
Apr 21 10:12:09.397760 systemd[1]: Started sshd@3-46.62.167.148:22-50.85.169.122:51930.service - OpenSSH per-connection server daemon (50.85.169.122:51930).
Apr 21 10:12:09.623621 sshd[1655]: Accepted publickey for core from 50.85.169.122 port 51930 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:09.624624 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:09.631666 systemd-logind[1496]: New session 4 of user core.
Apr 21 10:12:09.636955 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:12:09.796484 sshd[1655]: pam_unix(sshd:session): session closed for user core
Apr 21 10:12:09.801766 systemd[1]: sshd@3-46.62.167.148:22-50.85.169.122:51930.service: Deactivated successfully.
Apr 21 10:12:09.806297 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:12:09.809047 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:12:09.811617 systemd-logind[1496]: Removed session 4.
Apr 21 10:12:09.846205 systemd[1]: Started sshd@4-46.62.167.148:22-50.85.169.122:38702.service - OpenSSH per-connection server daemon (50.85.169.122:38702).
Apr 21 10:12:10.076289 sshd[1662]: Accepted publickey for core from 50.85.169.122 port 38702 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:10.079336 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:10.086904 systemd-logind[1496]: New session 5 of user core.
Apr 21 10:12:10.091053 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:12:10.231141 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:12:10.231879 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:12:10.253084 sudo[1665]: pam_unix(sudo:session): session closed for user root
Apr 21 10:12:10.285415 sshd[1662]: pam_unix(sshd:session): session closed for user core
Apr 21 10:12:10.291387 systemd[1]: sshd@4-46.62.167.148:22-50.85.169.122:38702.service: Deactivated successfully.
Apr 21 10:12:10.295271 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:12:10.298525 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:12:10.300656 systemd-logind[1496]: Removed session 5.
Apr 21 10:12:10.331226 systemd[1]: Started sshd@5-46.62.167.148:22-50.85.169.122:38708.service - OpenSSH per-connection server daemon (50.85.169.122:38708).
Apr 21 10:12:10.561083 sshd[1670]: Accepted publickey for core from 50.85.169.122 port 38708 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:10.564420 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:10.574925 systemd-logind[1496]: New session 6 of user core.
Apr 21 10:12:10.584110 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:12:10.702937 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:12:10.703955 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:12:10.710014 sudo[1674]: pam_unix(sudo:session): session closed for user root
Apr 21 10:12:10.716428 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:12:10.716737 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:12:10.738056 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:12:10.755755 auditctl[1677]: No rules
Apr 21 10:12:10.756319 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:12:10.756547 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:12:10.767160 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:12:10.804988 augenrules[1695]: No rules
Apr 21 10:12:10.807350 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:12:10.809032 sudo[1673]: pam_unix(sudo:session): session closed for user root
Apr 21 10:12:10.841005 sshd[1670]: pam_unix(sshd:session): session closed for user core
Apr 21 10:12:10.844677 systemd[1]: sshd@5-46.62.167.148:22-50.85.169.122:38708.service: Deactivated successfully.
Apr 21 10:12:10.846291 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:12:10.848043 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:12:10.849043 systemd-logind[1496]: Removed session 6.
Apr 21 10:12:10.893398 systemd[1]: Started sshd@6-46.62.167.148:22-50.85.169.122:38714.service - OpenSSH per-connection server daemon (50.85.169.122:38714).
Apr 21 10:12:11.093733 sshd[1703]: Accepted publickey for core from 50.85.169.122 port 38714 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:12:11.096287 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:12:11.101883 systemd-logind[1496]: New session 7 of user core.
Apr 21 10:12:11.107999 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:12:11.237416 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:12:11.238256 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:12:11.574323 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:12:11.575033 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:12:11.807588 dockerd[1722]: time="2026-04-21T10:12:11.807524516Z" level=info msg="Starting up"
Apr 21 10:12:11.867221 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2976827452-merged.mount: Deactivated successfully.
Apr 21 10:12:11.901712 dockerd[1722]: time="2026-04-21T10:12:11.901648716Z" level=info msg="Loading containers: start."
Apr 21 10:12:12.023890 kernel: Initializing XFRM netlink socket
Apr 21 10:12:12.101125 systemd-networkd[1416]: docker0: Link UP
Apr 21 10:12:12.118567 dockerd[1722]: time="2026-04-21T10:12:12.118359036Z" level=info msg="Loading containers: done."
Apr 21 10:12:12.136679 dockerd[1722]: time="2026-04-21T10:12:12.136623824Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:12:12.136857 dockerd[1722]: time="2026-04-21T10:12:12.136752614Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:12:12.136911 dockerd[1722]: time="2026-04-21T10:12:12.136888604Z" level=info msg="Daemon has completed initialization"
Apr 21 10:12:12.168564 dockerd[1722]: time="2026-04-21T10:12:12.168502277Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:12:12.169572 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:12:12.592300 containerd[1505]: time="2026-04-21T10:12:12.592224063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 21 10:12:13.227072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003804450.mount: Deactivated successfully.
Apr 21 10:12:14.407057 containerd[1505]: time="2026-04-21T10:12:14.406996269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:14.408802 containerd[1505]: time="2026-04-21T10:12:14.408652480Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100614"
Apr 21 10:12:14.411855 containerd[1505]: time="2026-04-21T10:12:14.411825191Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:14.414757 containerd[1505]: time="2026-04-21T10:12:14.414512192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:14.415274 containerd[1505]: time="2026-04-21T10:12:14.415248273Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.82297238s"
Apr 21 10:12:14.415307 containerd[1505]: time="2026-04-21T10:12:14.415279233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 21 10:12:14.416045 containerd[1505]: time="2026-04-21T10:12:14.416024363Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 21 10:12:14.640019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:12:14.647122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:12:14.823106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:12:14.823166 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:12:14.853553 kubelet[1929]: E0421 10:12:14.853508 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:12:14.859016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:12:14.859423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:12:15.635933 containerd[1505]: time="2026-04-21T10:12:15.635880851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:15.637068 containerd[1505]: time="2026-04-21T10:12:15.636930011Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252760"
Apr 21 10:12:15.638116 containerd[1505]: time="2026-04-21T10:12:15.637936382Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:15.641187 containerd[1505]: time="2026-04-21T10:12:15.640390403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:15.641187 containerd[1505]: time="2026-04-21T10:12:15.641159143Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.22506037s"
Apr 21 10:12:15.641187 containerd[1505]: time="2026-04-21T10:12:15.641184793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 21 10:12:15.641845 containerd[1505]: time="2026-04-21T10:12:15.641823303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 21 10:12:16.684202 containerd[1505]: time="2026-04-21T10:12:16.684151398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:16.687593 containerd[1505]: time="2026-04-21T10:12:16.687472349Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810913"
Apr 21 10:12:16.689727 containerd[1505]: time="2026-04-21T10:12:16.688577759Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:16.690914 containerd[1505]: time="2026-04-21T10:12:16.690671860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:16.691575 containerd[1505]: time="2026-04-21T10:12:16.691552611Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.049706718s"
Apr 21 10:12:16.691606 containerd[1505]: time="2026-04-21T10:12:16.691578531Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 21 10:12:16.692265 containerd[1505]: time="2026-04-21T10:12:16.692097461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 21 10:12:17.755327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122889261.mount: Deactivated successfully.
Apr 21 10:12:18.007203 containerd[1505]: time="2026-04-21T10:12:18.007090759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:18.008021 containerd[1505]: time="2026-04-21T10:12:18.007991779Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972982"
Apr 21 10:12:18.008723 containerd[1505]: time="2026-04-21T10:12:18.008693319Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:18.010033 containerd[1505]: time="2026-04-21T10:12:18.010004730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:18.010689 containerd[1505]: time="2026-04-21T10:12:18.010397460Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.318267499s"
Apr 21 10:12:18.010689 containerd[1505]: time="2026-04-21T10:12:18.010422990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 21 10:12:18.010868 containerd[1505]: time="2026-04-21T10:12:18.010845300Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 21 10:12:18.566927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253005800.mount: Deactivated successfully.
Apr 21 10:12:19.455365 containerd[1505]: time="2026-04-21T10:12:19.455306752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:19.456349 containerd[1505]: time="2026-04-21T10:12:19.456318632Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101"
Apr 21 10:12:19.457409 containerd[1505]: time="2026-04-21T10:12:19.457378423Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:19.461712 containerd[1505]: time="2026-04-21T10:12:19.461682174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:19.462554 containerd[1505]: time="2026-04-21T10:12:19.462453285Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.451584895s"
Apr 21 10:12:19.462554 containerd[1505]: time="2026-04-21T10:12:19.462477995Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 21 10:12:19.463216 containerd[1505]: time="2026-04-21T10:12:19.463194675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:12:19.962006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236374874.mount: Deactivated successfully.
Apr 21 10:12:19.970266 containerd[1505]: time="2026-04-21T10:12:19.970210646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:19.971561 containerd[1505]: time="2026-04-21T10:12:19.971497467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240"
Apr 21 10:12:19.972325 containerd[1505]: time="2026-04-21T10:12:19.972278477Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:19.975668 containerd[1505]: time="2026-04-21T10:12:19.975629798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:19.977678 containerd[1505]: time="2026-04-21T10:12:19.977471809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 514.222444ms"
Apr 21 10:12:19.977678 containerd[1505]: time="2026-04-21T10:12:19.977539909Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:12:19.978119 containerd[1505]: time="2026-04-21T10:12:19.978090519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 21 10:12:20.551660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210706824.mount: Deactivated successfully.
Apr 21 10:12:21.432363 containerd[1505]: time="2026-04-21T10:12:21.431560355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:21.432363 containerd[1505]: time="2026-04-21T10:12:21.432329395Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874917"
Apr 21 10:12:21.432882 containerd[1505]: time="2026-04-21T10:12:21.432863935Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:21.434904 containerd[1505]: time="2026-04-21T10:12:21.434886256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:21.435727 containerd[1505]: time="2026-04-21T10:12:21.435688467Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.457578458s"
Apr 21 10:12:21.435768 containerd[1505]: time="2026-04-21T10:12:21.435729207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 21 10:12:23.378953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:12:23.389559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:12:23.417463 systemd[1]: Reloading requested from client PID 2100 ('systemctl') (unit session-7.scope)...
Apr 21 10:12:23.417484 systemd[1]: Reloading...
Apr 21 10:12:23.506849 zram_generator::config[2138]: No configuration found.
Apr 21 10:12:23.603327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:12:23.674809 systemd[1]: Reloading finished in 256 ms.
Apr 21 10:12:23.723051 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:12:23.723132 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:12:23.723355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:12:23.724805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:12:23.851337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:12:23.855669 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:12:23.886070 kubelet[2194]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:12:23.886070 kubelet[2194]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:12:23.886438 kubelet[2194]: I0421 10:12:23.886090 2194 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:12:24.046019 kubelet[2194]: I0421 10:12:24.045644 2194 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 21 10:12:24.046019 kubelet[2194]: I0421 10:12:24.045666 2194 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:12:24.047468 kubelet[2194]: I0421 10:12:24.047450 2194 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:12:24.047468 kubelet[2194]: I0421 10:12:24.047466 2194 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:12:24.047687 kubelet[2194]: I0421 10:12:24.047670 2194 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:12:24.060688 kubelet[2194]: E0421 10:12:24.060629 2194 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.167.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:12:24.062850 kubelet[2194]: I0421 10:12:24.062724 2194 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:12:24.066579 kubelet[2194]: E0421 10:12:24.066540 2194 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:12:24.066639 kubelet[2194]: I0421 10:12:24.066605 2194 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:12:24.072617 kubelet[2194]: I0421 10:12:24.072333 2194 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:12:24.073712 kubelet[2194]: I0421 10:12:24.073665 2194 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:12:24.073850 kubelet[2194]: I0421 10:12:24.073699 2194 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-5-d97ac59edd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:12:24.073850 kubelet[2194]: I0421 10:12:24.073844 2194 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:12:24.073850 kubelet[2194]: I0421 10:12:24.073852 2194 container_manager_linux.go:306] "Creating device plugin manager"
Apr 21 10:12:24.074016 kubelet[2194]: I0421 10:12:24.073941 2194 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:12:24.076066 kubelet[2194]: I0421 10:12:24.076037 2194 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:12:24.076257 kubelet[2194]: I0421 10:12:24.076220 2194 kubelet.go:475] "Attempting to sync node with API server"
Apr 21 10:12:24.076257 kubelet[2194]: I0421 10:12:24.076234 2194 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:12:24.076257 kubelet[2194]: I0421 10:12:24.076251 2194 kubelet.go:387] "Adding apiserver pod source"
Apr 21 10:12:24.076631 kubelet[2194]: I0421 10:12:24.076267 2194 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:12:24.077900 kubelet[2194]: E0421 10:12:24.077829 2194 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.167.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:12:24.077900 kubelet[2194]: E0421 10:12:24.077899 2194 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.167.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-5-d97ac59edd&limit=500&resourceVersion=0\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:12:24.079860 kubelet[2194]: I0421 10:12:24.078256 2194 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:12:24.079860 kubelet[2194]: I0421 10:12:24.078625 2194 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:12:24.079860 kubelet[2194]: I0421 10:12:24.078640 2194 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:12:24.079860 kubelet[2194]: W0421 10:12:24.078690 2194 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:12:24.083384 kubelet[2194]: I0421 10:12:24.082372 2194 server.go:1262] "Started kubelet"
Apr 21 10:12:24.087440 kubelet[2194]: I0421 10:12:24.087411 2194 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:12:24.087711 kubelet[2194]: I0421 10:12:24.087655 2194 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:12:24.087774 kubelet[2194]: I0421 10:12:24.087719 2194 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:12:24.088102 kubelet[2194]: I0421 10:12:24.088070 2194 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:12:24.090885 kubelet[2194]: I0421 10:12:24.089710 2194 server.go:310] "Adding debug handlers to kubelet server"
Apr 21 10:12:24.092130 kubelet[2194]: I0421 10:12:24.091991 2194 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:12:24.093723 kubelet[2194]: E0421 10:12:24.092670 2194 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.167.148:6443/api/v1/namespaces/default/events\": dial tcp 46.62.167.148:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-5-d97ac59edd.18a8579383e5e7ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-5-d97ac59edd,UID:ci-4081-3-7-5-d97ac59edd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-5-d97ac59edd,},FirstTimestamp:2026-04-21 10:12:24.082352109 +0000 UTC m=+0.223770664,LastTimestamp:2026-04-21 10:12:24.082352109 +0000 UTC m=+0.223770664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-5-d97ac59edd,}"
Apr 21 10:12:24.095038 kubelet[2194]: I0421 10:12:24.094994 2194 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:12:24.098524 kubelet[2194]: E0421 10:12:24.098490 2194 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:12:24.099344 kubelet[2194]: E0421 10:12:24.098675 2194 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-5-d97ac59edd\" not found"
Apr 21 10:12:24.099344 kubelet[2194]: I0421 10:12:24.098697 2194 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 21 10:12:24.099344 kubelet[2194]: I0421 10:12:24.098828 2194 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:12:24.099344 kubelet[2194]: I0421 10:12:24.098863 2194 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:12:24.101155 kubelet[2194]: E0421 10:12:24.101125 2194 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.167.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:12:24.101756 kubelet[2194]: E0421 10:12:24.101726 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.167.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5-d97ac59edd?timeout=10s\": dial tcp 46.62.167.148:6443: connect: connection refused" interval="200ms"
Apr 21 10:12:24.110549 kubelet[2194]: I0421 10:12:24.110513 2194 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:12:24.110549 kubelet[2194]: I0421 10:12:24.110540 2194 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:12:24.110693 kubelet[2194]: I0421 10:12:24.110666 2194 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:12:24.119661 kubelet[2194]: I0421 10:12:24.119529 2194 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:12:24.120757 kubelet[2194]: I0421 10:12:24.120741 2194 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:12:24.120836 kubelet[2194]: I0421 10:12:24.120809 2194 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 21 10:12:24.120897 kubelet[2194]: I0421 10:12:24.120888 2194 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 21 10:12:24.121166 kubelet[2194]: E0421 10:12:24.120974 2194 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:12:24.126556 kubelet[2194]: E0421 10:12:24.126531 2194 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.167.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:12:24.126720 kubelet[2194]: I0421 10:12:24.126708 2194 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:12:24.126778 kubelet[2194]: I0421 10:12:24.126766 2194 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:12:24.126857 kubelet[2194]: I0421 10:12:24.126848 2194 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:12:24.129313 kubelet[2194]: I0421 10:12:24.129299 2194 policy_none.go:49] "None policy: Start"
Apr 21 10:12:24.129367 kubelet[2194]: I0421 10:12:24.129360 2194 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:12:24.129400 kubelet[2194]: I0421 10:12:24.129393 2194 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:12:24.130646 kubelet[2194]: I0421 10:12:24.130633 2194 policy_none.go:47] "Start"
Apr 21 10:12:24.134246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:12:24.145544 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:12:24.149642 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 21 10:12:24.159190 kubelet[2194]: E0421 10:12:24.158712 2194 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:12:24.159190 kubelet[2194]: I0421 10:12:24.158896 2194 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:12:24.159190 kubelet[2194]: I0421 10:12:24.158905 2194 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:12:24.159190 kubelet[2194]: I0421 10:12:24.159099 2194 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:12:24.160967 kubelet[2194]: E0421 10:12:24.160946 2194 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:12:24.161090 kubelet[2194]: E0421 10:12:24.161073 2194 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-5-d97ac59edd\" not found" Apr 21 10:12:24.241955 systemd[1]: Created slice kubepods-burstable-podacd7cb6fee4c7b4fe575b5ad77bb45ef.slice - libcontainer container kubepods-burstable-podacd7cb6fee4c7b4fe575b5ad77bb45ef.slice. 
Apr 21 10:12:24.262289 kubelet[2194]: I0421 10:12:24.261575 2194 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.262727 kubelet[2194]: E0421 10:12:24.262680 2194 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.263365 kubelet[2194]: E0421 10:12:24.263283 2194 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.167.148:6443/api/v1/nodes\": dial tcp 46.62.167.148:6443: connect: connection refused" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.269853 systemd[1]: Created slice kubepods-burstable-podf1a436e98b5719facb489996e06aae0a.slice - libcontainer container kubepods-burstable-podf1a436e98b5719facb489996e06aae0a.slice. Apr 21 10:12:24.280511 kubelet[2194]: E0421 10:12:24.280430 2194 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.288037 systemd[1]: Created slice kubepods-burstable-pod339e1aa52735046c653f729accffb501.slice - libcontainer container kubepods-burstable-pod339e1aa52735046c653f729accffb501.slice. 
Apr 21 10:12:24.293207 kubelet[2194]: E0421 10:12:24.293172 2194 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.300735 kubelet[2194]: I0421 10:12:24.300420 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.300735 kubelet[2194]: I0421 10:12:24.300516 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.300735 kubelet[2194]: I0421 10:12:24.300545 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acd7cb6fee4c7b4fe575b5ad77bb45ef-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" (UID: \"acd7cb6fee4c7b4fe575b5ad77bb45ef\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.300735 kubelet[2194]: I0421 10:12:24.300569 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acd7cb6fee4c7b4fe575b5ad77bb45ef-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" (UID: \"acd7cb6fee4c7b4fe575b5ad77bb45ef\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.300735 kubelet[2194]: 
I0421 10:12:24.300616 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/339e1aa52735046c653f729accffb501-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-5-d97ac59edd\" (UID: \"339e1aa52735046c653f729accffb501\") " pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.301068 kubelet[2194]: I0421 10:12:24.300645 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acd7cb6fee4c7b4fe575b5ad77bb45ef-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" (UID: \"acd7cb6fee4c7b4fe575b5ad77bb45ef\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.301068 kubelet[2194]: I0421 10:12:24.300681 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.302556 kubelet[2194]: I0421 10:12:24.301921 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.302556 kubelet[2194]: I0421 10:12:24.302004 2194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" 
(UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.303575 kubelet[2194]: E0421 10:12:24.303538 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.167.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5-d97ac59edd?timeout=10s\": dial tcp 46.62.167.148:6443: connect: connection refused" interval="400ms" Apr 21 10:12:24.466878 kubelet[2194]: I0421 10:12:24.466338 2194 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.467255 kubelet[2194]: E0421 10:12:24.467195 2194 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.167.148:6443/api/v1/nodes\": dial tcp 46.62.167.148:6443: connect: connection refused" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.567866 containerd[1505]: time="2026-04-21T10:12:24.567634501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-5-d97ac59edd,Uid:acd7cb6fee4c7b4fe575b5ad77bb45ef,Namespace:kube-system,Attempt:0,}" Apr 21 10:12:24.584674 containerd[1505]: time="2026-04-21T10:12:24.584572818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-5-d97ac59edd,Uid:f1a436e98b5719facb489996e06aae0a,Namespace:kube-system,Attempt:0,}" Apr 21 10:12:24.602159 containerd[1505]: time="2026-04-21T10:12:24.601640065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-5-d97ac59edd,Uid:339e1aa52735046c653f729accffb501,Namespace:kube-system,Attempt:0,}" Apr 21 10:12:24.704258 kubelet[2194]: E0421 10:12:24.704173 2194 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.167.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5-d97ac59edd?timeout=10s\": dial tcp 46.62.167.148:6443: connect: connection refused" interval="800ms" Apr 21 
10:12:24.871323 kubelet[2194]: I0421 10:12:24.871160 2194 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.872325 kubelet[2194]: E0421 10:12:24.872242 2194 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.167.148:6443/api/v1/nodes\": dial tcp 46.62.167.148:6443: connect: connection refused" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:24.895338 kubelet[2194]: E0421 10:12:24.895106 2194 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.167.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-5-d97ac59edd&limit=500&resourceVersion=0\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:12:25.084092 kubelet[2194]: E0421 10:12:25.084036 2194 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.167.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.167.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:12:25.090778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739284520.mount: Deactivated successfully. 
Apr 21 10:12:25.098291 containerd[1505]: time="2026-04-21T10:12:25.098224692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:12:25.101168 containerd[1505]: time="2026-04-21T10:12:25.101102873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 21 10:12:25.102230 containerd[1505]: time="2026-04-21T10:12:25.102176044Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:12:25.104024 containerd[1505]: time="2026-04-21T10:12:25.103882004Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:12:25.105637 containerd[1505]: time="2026-04-21T10:12:25.105378175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:12:25.108072 containerd[1505]: time="2026-04-21T10:12:25.107981206Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:12:25.108157 containerd[1505]: time="2026-04-21T10:12:25.108113616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:12:25.114018 containerd[1505]: time="2026-04-21T10:12:25.113966878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:12:25.115340 
containerd[1505]: time="2026-04-21T10:12:25.115300959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.554078ms" Apr 21 10:12:25.116787 containerd[1505]: time="2026-04-21T10:12:25.116742920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 532.075282ms" Apr 21 10:12:25.117242 containerd[1505]: time="2026-04-21T10:12:25.117182530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.385795ms" Apr 21 10:12:25.224990 containerd[1505]: time="2026-04-21T10:12:25.224898605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:25.224990 containerd[1505]: time="2026-04-21T10:12:25.224945255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:25.225401 containerd[1505]: time="2026-04-21T10:12:25.225120315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:25.225401 containerd[1505]: time="2026-04-21T10:12:25.225150475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:25.225401 containerd[1505]: time="2026-04-21T10:12:25.225170225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:25.225401 containerd[1505]: time="2026-04-21T10:12:25.225230225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:25.225401 containerd[1505]: time="2026-04-21T10:12:25.225051295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:25.225401 containerd[1505]: time="2026-04-21T10:12:25.225224205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:25.228825 containerd[1505]: time="2026-04-21T10:12:25.226915846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:25.228825 containerd[1505]: time="2026-04-21T10:12:25.226950806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:25.228825 containerd[1505]: time="2026-04-21T10:12:25.226961396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:25.228825 containerd[1505]: time="2026-04-21T10:12:25.227011946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:25.246970 systemd[1]: Started cri-containerd-b4590dbee6f40a4c443e7cf86176776a77683fe782e930cee103d03690c66d44.scope - libcontainer container b4590dbee6f40a4c443e7cf86176776a77683fe782e930cee103d03690c66d44. Apr 21 10:12:25.257938 systemd[1]: Started cri-containerd-07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c.scope - libcontainer container 07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c. Apr 21 10:12:25.259617 systemd[1]: Started cri-containerd-c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b.scope - libcontainer container c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b. Apr 21 10:12:25.302305 containerd[1505]: time="2026-04-21T10:12:25.302257407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-5-d97ac59edd,Uid:f1a436e98b5719facb489996e06aae0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b\"" Apr 21 10:12:25.308047 containerd[1505]: time="2026-04-21T10:12:25.308019229Z" level=info msg="CreateContainer within sandbox \"c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:12:25.311081 containerd[1505]: time="2026-04-21T10:12:25.310919850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-5-d97ac59edd,Uid:acd7cb6fee4c7b4fe575b5ad77bb45ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4590dbee6f40a4c443e7cf86176776a77683fe782e930cee103d03690c66d44\"" Apr 21 10:12:25.316288 containerd[1505]: time="2026-04-21T10:12:25.316267433Z" level=info msg="CreateContainer within sandbox \"b4590dbee6f40a4c443e7cf86176776a77683fe782e930cee103d03690c66d44\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:12:25.319479 containerd[1505]: 
time="2026-04-21T10:12:25.319460964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-5-d97ac59edd,Uid:339e1aa52735046c653f729accffb501,Namespace:kube-system,Attempt:0,} returns sandbox id \"07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c\"" Apr 21 10:12:25.322734 containerd[1505]: time="2026-04-21T10:12:25.322717695Z" level=info msg="CreateContainer within sandbox \"07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:12:25.331683 containerd[1505]: time="2026-04-21T10:12:25.331654579Z" level=info msg="CreateContainer within sandbox \"c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6\"" Apr 21 10:12:25.332171 containerd[1505]: time="2026-04-21T10:12:25.332140419Z" level=info msg="StartContainer for \"8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6\"" Apr 21 10:12:25.335541 containerd[1505]: time="2026-04-21T10:12:25.335457381Z" level=info msg="CreateContainer within sandbox \"b4590dbee6f40a4c443e7cf86176776a77683fe782e930cee103d03690c66d44\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56ec0664bc2bd6372320c63a2ca3a48887aa0f08a1f4964d4af3f8aa706cb542\"" Apr 21 10:12:25.335724 containerd[1505]: time="2026-04-21T10:12:25.335711291Z" level=info msg="StartContainer for \"56ec0664bc2bd6372320c63a2ca3a48887aa0f08a1f4964d4af3f8aa706cb542\"" Apr 21 10:12:25.343090 containerd[1505]: time="2026-04-21T10:12:25.343064054Z" level=info msg="CreateContainer within sandbox \"07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285\"" Apr 21 10:12:25.343379 containerd[1505]: 
time="2026-04-21T10:12:25.343358944Z" level=info msg="StartContainer for \"9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285\"" Apr 21 10:12:25.358341 systemd[1]: Started cri-containerd-8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6.scope - libcontainer container 8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6. Apr 21 10:12:25.369563 systemd[1]: Started cri-containerd-56ec0664bc2bd6372320c63a2ca3a48887aa0f08a1f4964d4af3f8aa706cb542.scope - libcontainer container 56ec0664bc2bd6372320c63a2ca3a48887aa0f08a1f4964d4af3f8aa706cb542. Apr 21 10:12:25.386242 systemd[1]: Started cri-containerd-9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285.scope - libcontainer container 9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285. Apr 21 10:12:25.410716 containerd[1505]: time="2026-04-21T10:12:25.410681992Z" level=info msg="StartContainer for \"8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6\" returns successfully" Apr 21 10:12:25.425797 containerd[1505]: time="2026-04-21T10:12:25.425509648Z" level=info msg="StartContainer for \"56ec0664bc2bd6372320c63a2ca3a48887aa0f08a1f4964d4af3f8aa706cb542\" returns successfully" Apr 21 10:12:25.456160 containerd[1505]: time="2026-04-21T10:12:25.456122891Z" level=info msg="StartContainer for \"9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285\" returns successfully" Apr 21 10:12:25.674689 kubelet[2194]: I0421 10:12:25.674655 2194 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.139196 kubelet[2194]: E0421 10:12:26.139114 2194 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.141095 kubelet[2194]: E0421 10:12:26.141054 2194 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.144371 kubelet[2194]: E0421 10:12:26.144344 2194 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.268022 kubelet[2194]: E0421 10:12:26.267942 2194 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-5-d97ac59edd\" not found" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.322410 kubelet[2194]: I0421 10:12:26.322369 2194 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.322410 kubelet[2194]: E0421 10:12:26.322410 2194 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-5-d97ac59edd\": node \"ci-4081-3-7-5-d97ac59edd\" not found" Apr 21 10:12:26.332546 kubelet[2194]: E0421 10:12:26.332513 2194 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" Apr 21 10:12:26.433434 kubelet[2194]: E0421 10:12:26.433288 2194 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-5-d97ac59edd\" not found" Apr 21 10:12:26.601551 kubelet[2194]: I0421 10:12:26.601308 2194 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.607720 kubelet[2194]: E0421 10:12:26.607548 2194 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-5-d97ac59edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.607720 kubelet[2194]: I0421 10:12:26.607573 2194 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.609682 
kubelet[2194]: E0421 10:12:26.609625 2194 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.609682 kubelet[2194]: I0421 10:12:26.609651 2194 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:26.611382 kubelet[2194]: E0421 10:12:26.611343 2194 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:27.078788 kubelet[2194]: I0421 10:12:27.078572 2194 apiserver.go:52] "Watching apiserver" Apr 21 10:12:27.099441 kubelet[2194]: I0421 10:12:27.099377 2194 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:12:27.146925 kubelet[2194]: I0421 10:12:27.146162 2194 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:27.146925 kubelet[2194]: I0421 10:12:27.146631 2194 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:27.149214 kubelet[2194]: E0421 10:12:27.148972 2194 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:27.151624 kubelet[2194]: E0421 10:12:27.151570 2194 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-5-d97ac59edd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" Apr 
21 10:12:28.147765 kubelet[2194]: I0421 10:12:28.147713 2194 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" Apr 21 10:12:28.316019 systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-7.scope)... Apr 21 10:12:28.316034 systemd[1]: Reloading... Apr 21 10:12:28.407847 zram_generator::config[2517]: No configuration found. Apr 21 10:12:28.506652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:12:28.579939 systemd[1]: Reloading finished in 263 ms. Apr 21 10:12:28.620240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:12:28.642116 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:12:28.642354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:12:28.647058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:12:28.787109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:12:28.790574 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:12:28.830506 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:12:28.830506 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:12:28.831927 kubelet[2567]: I0421 10:12:28.830571 2567 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:12:28.837939 kubelet[2567]: I0421 10:12:28.837901 2567 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 21 10:12:28.837939 kubelet[2567]: I0421 10:12:28.837920 2567 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:12:28.837939 kubelet[2567]: I0421 10:12:28.837944 2567 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:12:28.838092 kubelet[2567]: I0421 10:12:28.837959 2567 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:12:28.838442 kubelet[2567]: I0421 10:12:28.838112 2567 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:12:28.839716 kubelet[2567]: I0421 10:12:28.839071 2567 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 10:12:28.840834 kubelet[2567]: I0421 10:12:28.840801 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:12:28.843498 kubelet[2567]: E0421 10:12:28.843432 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:12:28.843687 kubelet[2567]: I0421 10:12:28.843570 2567 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:12:28.847107 kubelet[2567]: I0421 10:12:28.847081 2567 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:12:28.847298 kubelet[2567]: I0421 10:12:28.847265 2567 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:12:28.847393 kubelet[2567]: I0421 10:12:28.847282 2567 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-5-d97ac59edd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:12:28.847921 kubelet[2567]: I0421 10:12:28.847901 2567 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:12:28.847921 kubelet[2567]: I0421 10:12:28.847914 2567 container_manager_linux.go:306] "Creating device plugin manager"
Apr 21 10:12:28.847994 kubelet[2567]: I0421 10:12:28.847938 2567 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:12:28.848342 kubelet[2567]: I0421 10:12:28.848093 2567 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:12:28.848342 kubelet[2567]: I0421 10:12:28.848215 2567 kubelet.go:475] "Attempting to sync node with API server"
Apr 21 10:12:28.848342 kubelet[2567]: I0421 10:12:28.848231 2567 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:12:28.848342 kubelet[2567]: I0421 10:12:28.848266 2567 kubelet.go:387] "Adding apiserver pod source"
Apr 21 10:12:28.848342 kubelet[2567]: I0421 10:12:28.848275 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:12:28.853062 kubelet[2567]: I0421 10:12:28.852979 2567 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:12:28.853548 kubelet[2567]: I0421 10:12:28.853514 2567 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:12:28.853642 kubelet[2567]: I0421 10:12:28.853612 2567 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:12:28.857888 kubelet[2567]: I0421 10:12:28.857872 2567 server.go:1262] "Started kubelet"
Apr 21 10:12:28.859375 kubelet[2567]: I0421 10:12:28.859242 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:12:28.871364 kubelet[2567]: E0421 10:12:28.871116 2567 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:12:28.871767 kubelet[2567]: I0421 10:12:28.871742 2567 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:12:28.872911 kubelet[2567]: I0421 10:12:28.872894 2567 server.go:310] "Adding debug handlers to kubelet server"
Apr 21 10:12:28.873291 kubelet[2567]: I0421 10:12:28.873279 2567 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 21 10:12:28.876506 kubelet[2567]: I0421 10:12:28.876022 2567 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:12:28.876506 kubelet[2567]: I0421 10:12:28.876060 2567 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:12:28.876506 kubelet[2567]: I0421 10:12:28.876197 2567 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:12:28.876506 kubelet[2567]: I0421 10:12:28.876291 2567 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:12:28.876506 kubelet[2567]: I0421 10:12:28.876382 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:12:28.876506 kubelet[2567]: I0421 10:12:28.876391 2567 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:12:28.880522 kubelet[2567]: I0421 10:12:28.880282 2567 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:12:28.880522 kubelet[2567]: I0421 10:12:28.880388 2567 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:12:28.881446 kubelet[2567]: I0421 10:12:28.881410 2567 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:12:28.883452 kubelet[2567]: I0421 10:12:28.883418 2567 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:12:28.883635 kubelet[2567]: I0421 10:12:28.883548 2567 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 21 10:12:28.883635 kubelet[2567]: I0421 10:12:28.883568 2567 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 21 10:12:28.883719 kubelet[2567]: E0421 10:12:28.883602 2567 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:12:28.883788 kubelet[2567]: I0421 10:12:28.883761 2567 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:12:28.937596 kubelet[2567]: I0421 10:12:28.937574 2567 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:12:28.938017 kubelet[2567]: I0421 10:12:28.937938 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:12:28.938017 kubelet[2567]: I0421 10:12:28.937965 2567 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:12:28.938341 kubelet[2567]: I0421 10:12:28.938312 2567 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 21 10:12:28.938504 kubelet[2567]: I0421 10:12:28.938381 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 21 10:12:28.938504 kubelet[2567]: I0421 10:12:28.938398 2567 policy_none.go:49] "None policy: Start"
Apr 21 10:12:28.938504 kubelet[2567]: I0421 10:12:28.938423 2567 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:12:28.938504 kubelet[2567]: I0421 10:12:28.938433 2567 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:12:28.938761 kubelet[2567]: I0421 10:12:28.938691 2567 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 10:12:28.938761 kubelet[2567]: I0421 10:12:28.938705 2567 policy_none.go:47] "Start"
Apr 21 10:12:28.943697 kubelet[2567]: E0421 10:12:28.942985 2567 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:12:28.943697 kubelet[2567]: I0421 10:12:28.943163 2567 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:12:28.943697 kubelet[2567]: I0421 10:12:28.943173 2567 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:12:28.944085 kubelet[2567]: I0421 10:12:28.944064 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:12:28.945368 kubelet[2567]: E0421 10:12:28.945347 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:12:28.985063 kubelet[2567]: I0421 10:12:28.985026 2567 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:28.985615 kubelet[2567]: I0421 10:12:28.985501 2567 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:28.985685 kubelet[2567]: I0421 10:12:28.985648 2567 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:28.994021 kubelet[2567]: E0421 10:12:28.993971 2567 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-5-d97ac59edd\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.052112 kubelet[2567]: I0421 10:12:29.050602 2567 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.060192 kubelet[2567]: I0421 10:12:29.060125 2567 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.060447 kubelet[2567]: I0421 10:12:29.060222 2567 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178298 kubelet[2567]: I0421 10:12:29.178196 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acd7cb6fee4c7b4fe575b5ad77bb45ef-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" (UID: \"acd7cb6fee4c7b4fe575b5ad77bb45ef\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178298 kubelet[2567]: I0421 10:12:29.178261 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178504 kubelet[2567]: I0421 10:12:29.178298 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178504 kubelet[2567]: I0421 10:12:29.178339 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178504 kubelet[2567]: I0421 10:12:29.178369 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178504 kubelet[2567]: I0421 10:12:29.178402 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/339e1aa52735046c653f729accffb501-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-5-d97ac59edd\" (UID: \"339e1aa52735046c653f729accffb501\") " pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178504 kubelet[2567]: I0421 10:12:29.178425 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acd7cb6fee4c7b4fe575b5ad77bb45ef-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" (UID: \"acd7cb6fee4c7b4fe575b5ad77bb45ef\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178649 kubelet[2567]: I0421 10:12:29.178465 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acd7cb6fee4c7b4fe575b5ad77bb45ef-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" (UID: \"acd7cb6fee4c7b4fe575b5ad77bb45ef\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.178649 kubelet[2567]: I0421 10:12:29.178499 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f1a436e98b5719facb489996e06aae0a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-5-d97ac59edd\" (UID: \"f1a436e98b5719facb489996e06aae0a\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.849833 kubelet[2567]: I0421 10:12:29.848869 2567 apiserver.go:52] "Watching apiserver"
Apr 21 10:12:29.876452 kubelet[2567]: I0421 10:12:29.876406 2567 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 10:12:29.917648 kubelet[2567]: I0421 10:12:29.917620 2567 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.926417 kubelet[2567]: E0421 10:12:29.926210 2567 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-5-d97ac59edd\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd"
Apr 21 10:12:29.941848 kubelet[2567]: I0421 10:12:29.941692 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-5-d97ac59edd" podStartSLOduration=1.941675239 podStartE2EDuration="1.941675239s" podCreationTimestamp="2026-04-21 10:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:29.941470619 +0000 UTC m=+1.146161868" watchObservedRunningTime="2026-04-21 10:12:29.941675239 +0000 UTC m=+1.146366488"
Apr 21 10:12:29.951577 kubelet[2567]: I0421 10:12:29.950241 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-5-d97ac59edd" podStartSLOduration=1.950226113 podStartE2EDuration="1.950226113s" podCreationTimestamp="2026-04-21 10:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:29.949943243 +0000 UTC m=+1.154634482" watchObservedRunningTime="2026-04-21 10:12:29.950226113 +0000 UTC m=+1.154917342"
Apr 21 10:12:29.962738 kubelet[2567]: I0421 10:12:29.962676 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-5-d97ac59edd" podStartSLOduration=1.962661238 podStartE2EDuration="1.962661238s" podCreationTimestamp="2026-04-21 10:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:29.962584048 +0000 UTC m=+1.167275287" watchObservedRunningTime="2026-04-21 10:12:29.962661238 +0000 UTC m=+1.167352467"
Apr 21 10:12:35.715899 kubelet[2567]: I0421 10:12:35.715787 2567 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 21 10:12:35.716733 containerd[1505]: time="2026-04-21T10:12:35.716488161Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 21 10:12:35.717447 kubelet[2567]: I0421 10:12:35.716766 2567 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 21 10:12:36.484280 systemd[1]: Created slice kubepods-besteffort-pod7d090ca7_8a24_4c28_837d_19789107e03b.slice - libcontainer container kubepods-besteffort-pod7d090ca7_8a24_4c28_837d_19789107e03b.slice.
Apr 21 10:12:36.529381 kubelet[2567]: I0421 10:12:36.529141 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d090ca7-8a24-4c28-837d-19789107e03b-kube-proxy\") pod \"kube-proxy-tkzgr\" (UID: \"7d090ca7-8a24-4c28-837d-19789107e03b\") " pod="kube-system/kube-proxy-tkzgr"
Apr 21 10:12:36.529381 kubelet[2567]: I0421 10:12:36.529200 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d090ca7-8a24-4c28-837d-19789107e03b-lib-modules\") pod \"kube-proxy-tkzgr\" (UID: \"7d090ca7-8a24-4c28-837d-19789107e03b\") " pod="kube-system/kube-proxy-tkzgr"
Apr 21 10:12:36.529381 kubelet[2567]: I0421 10:12:36.529228 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d090ca7-8a24-4c28-837d-19789107e03b-xtables-lock\") pod \"kube-proxy-tkzgr\" (UID: \"7d090ca7-8a24-4c28-837d-19789107e03b\") " pod="kube-system/kube-proxy-tkzgr"
Apr 21 10:12:36.529381 kubelet[2567]: I0421 10:12:36.529256 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklgq\" (UniqueName: \"kubernetes.io/projected/7d090ca7-8a24-4c28-837d-19789107e03b-kube-api-access-gklgq\") pod \"kube-proxy-tkzgr\" (UID: \"7d090ca7-8a24-4c28-837d-19789107e03b\") " pod="kube-system/kube-proxy-tkzgr"
Apr 21 10:12:36.638561 kubelet[2567]: E0421 10:12:36.638507 2567 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 21 10:12:36.638561 kubelet[2567]: E0421 10:12:36.638547 2567 projected.go:196] Error preparing data for projected volume kube-api-access-gklgq for pod kube-system/kube-proxy-tkzgr: configmap "kube-root-ca.crt" not found
Apr 21 10:12:36.639276 kubelet[2567]: E0421 10:12:36.639127 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d090ca7-8a24-4c28-837d-19789107e03b-kube-api-access-gklgq podName:7d090ca7-8a24-4c28-837d-19789107e03b nodeName:}" failed. No retries permitted until 2026-04-21 10:12:37.138655835 +0000 UTC m=+8.343347114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gklgq" (UniqueName: "kubernetes.io/projected/7d090ca7-8a24-4c28-837d-19789107e03b-kube-api-access-gklgq") pod "kube-proxy-tkzgr" (UID: "7d090ca7-8a24-4c28-837d-19789107e03b") : configmap "kube-root-ca.crt" not found
Apr 21 10:12:36.882522 systemd[1]: Created slice kubepods-besteffort-podff771875_d449_4eb1_8191_7ebf62566a15.slice - libcontainer container kubepods-besteffort-podff771875_d449_4eb1_8191_7ebf62566a15.slice.
Apr 21 10:12:36.932899 kubelet[2567]: I0421 10:12:36.932692 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff771875-d449-4eb1-8191-7ebf62566a15-var-lib-calico\") pod \"tigera-operator-5588576f44-xsnhd\" (UID: \"ff771875-d449-4eb1-8191-7ebf62566a15\") " pod="tigera-operator/tigera-operator-5588576f44-xsnhd"
Apr 21 10:12:36.932899 kubelet[2567]: I0421 10:12:36.932746 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rd6\" (UniqueName: \"kubernetes.io/projected/ff771875-d449-4eb1-8191-7ebf62566a15-kube-api-access-62rd6\") pod \"tigera-operator-5588576f44-xsnhd\" (UID: \"ff771875-d449-4eb1-8191-7ebf62566a15\") " pod="tigera-operator/tigera-operator-5588576f44-xsnhd"
Apr 21 10:12:37.189670 containerd[1505]: time="2026-04-21T10:12:37.189589754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-xsnhd,Uid:ff771875-d449-4eb1-8191-7ebf62566a15,Namespace:tigera-operator,Attempt:0,}"
Apr 21 10:12:37.235013 containerd[1505]: time="2026-04-21T10:12:37.234731784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:12:37.236265 containerd[1505]: time="2026-04-21T10:12:37.234808267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:12:37.236265 containerd[1505]: time="2026-04-21T10:12:37.235136739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:12:37.236265 containerd[1505]: time="2026-04-21T10:12:37.235295635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:12:37.271944 systemd[1]: Started cri-containerd-b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1.scope - libcontainer container b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1.
Apr 21 10:12:37.313448 containerd[1505]: time="2026-04-21T10:12:37.313416793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-xsnhd,Uid:ff771875-d449-4eb1-8191-7ebf62566a15,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1\""
Apr 21 10:12:37.316633 containerd[1505]: time="2026-04-21T10:12:37.316585264Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 21 10:12:37.397562 containerd[1505]: time="2026-04-21T10:12:37.397422805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkzgr,Uid:7d090ca7-8a24-4c28-837d-19789107e03b,Namespace:kube-system,Attempt:0,}"
Apr 21 10:12:37.430733 containerd[1505]: time="2026-04-21T10:12:37.430443735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:12:37.430733 containerd[1505]: time="2026-04-21T10:12:37.430608552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:12:37.430733 containerd[1505]: time="2026-04-21T10:12:37.430719326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:12:37.431196 containerd[1505]: time="2026-04-21T10:12:37.430912933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:12:37.457007 systemd[1]: Started cri-containerd-191be70cc402ca3ee4c0cc47630adf637a478564d6d5ad89b7e48ddf2594949e.scope - libcontainer container 191be70cc402ca3ee4c0cc47630adf637a478564d6d5ad89b7e48ddf2594949e.
Apr 21 10:12:37.479512 containerd[1505]: time="2026-04-21T10:12:37.479279325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkzgr,Uid:7d090ca7-8a24-4c28-837d-19789107e03b,Namespace:kube-system,Attempt:0,} returns sandbox id \"191be70cc402ca3ee4c0cc47630adf637a478564d6d5ad89b7e48ddf2594949e\""
Apr 21 10:12:37.485517 containerd[1505]: time="2026-04-21T10:12:37.485487390Z" level=info msg="CreateContainer within sandbox \"191be70cc402ca3ee4c0cc47630adf637a478564d6d5ad89b7e48ddf2594949e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 10:12:37.502199 containerd[1505]: time="2026-04-21T10:12:37.502062748Z" level=info msg="CreateContainer within sandbox \"191be70cc402ca3ee4c0cc47630adf637a478564d6d5ad89b7e48ddf2594949e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"698254356847019d0d6d111967d4e6148e35e04682b58d634036529e54c052fb\""
Apr 21 10:12:37.503232 containerd[1505]: time="2026-04-21T10:12:37.503180750Z" level=info msg="StartContainer for \"698254356847019d0d6d111967d4e6148e35e04682b58d634036529e54c052fb\""
Apr 21 10:12:37.529466 systemd[1]: Started cri-containerd-698254356847019d0d6d111967d4e6148e35e04682b58d634036529e54c052fb.scope - libcontainer container 698254356847019d0d6d111967d4e6148e35e04682b58d634036529e54c052fb.
Apr 21 10:12:37.560174 containerd[1505]: time="2026-04-21T10:12:37.560055534Z" level=info msg="StartContainer for \"698254356847019d0d6d111967d4e6148e35e04682b58d634036529e54c052fb\" returns successfully"
Apr 21 10:12:37.958374 kubelet[2567]: I0421 10:12:37.958251 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tkzgr" podStartSLOduration=1.958231734 podStartE2EDuration="1.958231734s" podCreationTimestamp="2026-04-21 10:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:37.957675963 +0000 UTC m=+9.162367232" watchObservedRunningTime="2026-04-21 10:12:37.958231734 +0000 UTC m=+9.162923013"
Apr 21 10:12:39.091736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359234935.mount: Deactivated successfully.
Apr 21 10:12:39.813672 containerd[1505]: time="2026-04-21T10:12:39.813621654Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:39.814963 containerd[1505]: time="2026-04-21T10:12:39.814787643Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 21 10:12:39.816149 containerd[1505]: time="2026-04-21T10:12:39.815926661Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:39.817892 containerd[1505]: time="2026-04-21T10:12:39.817873085Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:12:39.818344 containerd[1505]: time="2026-04-21T10:12:39.818321061Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.501698146s"
Apr 21 10:12:39.818386 containerd[1505]: time="2026-04-21T10:12:39.818345122Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 21 10:12:39.823069 containerd[1505]: time="2026-04-21T10:12:39.823015117Z" level=info msg="CreateContainer within sandbox \"b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 21 10:12:39.834852 containerd[1505]: time="2026-04-21T10:12:39.834788329Z" level=info msg="CreateContainer within sandbox \"b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75\""
Apr 21 10:12:39.835448 containerd[1505]: time="2026-04-21T10:12:39.835424901Z" level=info msg="StartContainer for \"48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75\""
Apr 21 10:12:39.864970 systemd[1]: Started cri-containerd-48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75.scope - libcontainer container 48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75.
Apr 21 10:12:39.887740 containerd[1505]: time="2026-04-21T10:12:39.887704353Z" level=info msg="StartContainer for \"48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75\" returns successfully"
Apr 21 10:12:39.960367 kubelet[2567]: I0421 10:12:39.960202 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-xsnhd" podStartSLOduration=1.456570526 podStartE2EDuration="3.96018911s" podCreationTimestamp="2026-04-21 10:12:36 +0000 UTC" firstStartedPulling="2026-04-21 10:12:37.315788303 +0000 UTC m=+8.520479572" lastFinishedPulling="2026-04-21 10:12:39.819406917 +0000 UTC m=+11.024098156" observedRunningTime="2026-04-21 10:12:39.960126688 +0000 UTC m=+11.164817927" watchObservedRunningTime="2026-04-21 10:12:39.96018911 +0000 UTC m=+11.164880349"
Apr 21 10:12:45.041349 sudo[1706]: pam_unix(sudo:session): session closed for user root
Apr 21 10:12:45.073050 sshd[1703]: pam_unix(sshd:session): session closed for user core
Apr 21 10:12:45.076241 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:12:45.078607 systemd[1]: sshd@6-46.62.167.148:22-50.85.169.122:38714.service: Deactivated successfully.
Apr 21 10:12:45.081512 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:12:45.081918 systemd[1]: session-7.scope: Consumed 3.903s CPU time, 158.9M memory peak, 0B memory swap peak.
Apr 21 10:12:45.082725 systemd-logind[1496]: Removed session 7.
Apr 21 10:12:47.029766 systemd[1]: Created slice kubepods-besteffort-pod49cbc1f1_3c1b_4126_abf1_0fe5f8661502.slice - libcontainer container kubepods-besteffort-pod49cbc1f1_3c1b_4126_abf1_0fe5f8661502.slice.
Apr 21 10:12:47.097890 systemd[1]: Created slice kubepods-besteffort-podacfeffe5_ea07_4121_aaf0_93e8cd3a67cd.slice - libcontainer container kubepods-besteffort-podacfeffe5_ea07_4121_aaf0_93e8cd3a67cd.slice.
Apr 21 10:12:47.104037 kubelet[2567]: I0421 10:12:47.103408 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/49cbc1f1-3c1b-4126-abf1-0fe5f8661502-typha-certs\") pod \"calico-typha-d5bcb5fdc-xz2b4\" (UID: \"49cbc1f1-3c1b-4126-abf1-0fe5f8661502\") " pod="calico-system/calico-typha-d5bcb5fdc-xz2b4"
Apr 21 10:12:47.104037 kubelet[2567]: I0421 10:12:47.103435 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49cbc1f1-3c1b-4126-abf1-0fe5f8661502-tigera-ca-bundle\") pod \"calico-typha-d5bcb5fdc-xz2b4\" (UID: \"49cbc1f1-3c1b-4126-abf1-0fe5f8661502\") " pod="calico-system/calico-typha-d5bcb5fdc-xz2b4"
Apr 21 10:12:47.104037 kubelet[2567]: I0421 10:12:47.103447 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89lls\" (UniqueName: \"kubernetes.io/projected/49cbc1f1-3c1b-4126-abf1-0fe5f8661502-kube-api-access-89lls\") pod \"calico-typha-d5bcb5fdc-xz2b4\" (UID: \"49cbc1f1-3c1b-4126-abf1-0fe5f8661502\") " pod="calico-system/calico-typha-d5bcb5fdc-xz2b4"
Apr 21 10:12:47.206150 kubelet[2567]: I0421 10:12:47.204457 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-cni-net-dir\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206150 kubelet[2567]: I0421 10:12:47.204500 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-node-certs\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206150 kubelet[2567]: I0421 10:12:47.204515 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-nodeproc\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206150 kubelet[2567]: I0421 10:12:47.204547 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-bpffs\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206150 kubelet[2567]: I0421 10:12:47.204563 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-policysync\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206361 kubelet[2567]: I0421 10:12:47.204581 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7l2b\" (UniqueName: \"kubernetes.io/projected/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-kube-api-access-f7l2b\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206361 kubelet[2567]: I0421 10:12:47.204608 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-cni-bin-dir\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206361 kubelet[2567]: I0421 10:12:47.204623 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-flexvol-driver-host\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206361 kubelet[2567]: I0421 10:12:47.204637 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-var-lib-calico\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206361 kubelet[2567]: I0421 10:12:47.204653 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-cni-log-dir\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206486 kubelet[2567]: I0421 10:12:47.204666 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-lib-modules\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206486 kubelet[2567]: I0421 10:12:47.204680 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-sys-fs\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206486 kubelet[2567]: I0421 10:12:47.204694 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-tigera-ca-bundle\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206486 kubelet[2567]: I0421 10:12:47.204711 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-var-run-calico\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.206486 kubelet[2567]: I0421 10:12:47.204738 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acfeffe5-ea07-4121-aaf0-93e8cd3a67cd-xtables-lock\") pod \"calico-node-f26lg\" (UID: \"acfeffe5-ea07-4121-aaf0-93e8cd3a67cd\") " pod="calico-system/calico-node-f26lg"
Apr 21 10:12:47.224941 kubelet[2567]: E0421 10:12:47.224590 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f"
Apr 21 10:12:47.275444 update_engine[1497]: I20260421 10:12:47.275357 1497 update_attempter.cc:509] Updating boot flags...
Apr 21 10:12:47.305923 kubelet[2567]: I0421 10:12:47.305657 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/45c6765d-bd67-4624-ba22-eae28e77978f-registration-dir\") pod \"csi-node-driver-jm9tq\" (UID: \"45c6765d-bd67-4624-ba22-eae28e77978f\") " pod="calico-system/csi-node-driver-jm9tq"
Apr 21 10:12:47.305923 kubelet[2567]: I0421 10:12:47.305721 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/45c6765d-bd67-4624-ba22-eae28e77978f-varrun\") pod \"csi-node-driver-jm9tq\" (UID: \"45c6765d-bd67-4624-ba22-eae28e77978f\") " pod="calico-system/csi-node-driver-jm9tq"
Apr 21 10:12:47.305923 kubelet[2567]: I0421 10:12:47.305748 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/45c6765d-bd67-4624-ba22-eae28e77978f-socket-dir\") pod \"csi-node-driver-jm9tq\" (UID: \"45c6765d-bd67-4624-ba22-eae28e77978f\") " pod="calico-system/csi-node-driver-jm9tq"
Apr 21 10:12:47.305923 kubelet[2567]: I0421 10:12:47.305761 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45c6765d-bd67-4624-ba22-eae28e77978f-kubelet-dir\") pod \"csi-node-driver-jm9tq\" (UID: \"45c6765d-bd67-4624-ba22-eae28e77978f\") " pod="calico-system/csi-node-driver-jm9tq"
Apr 21 10:12:47.305923 kubelet[2567]: I0421 10:12:47.305772 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbcd\" (UniqueName:
\"kubernetes.io/projected/45c6765d-bd67-4624-ba22-eae28e77978f-kube-api-access-gsbcd\") pod \"csi-node-driver-jm9tq\" (UID: \"45c6765d-bd67-4624-ba22-eae28e77978f\") " pod="calico-system/csi-node-driver-jm9tq" Apr 21 10:12:47.314352 kubelet[2567]: E0421 10:12:47.313994 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.314352 kubelet[2567]: W0421 10:12:47.314195 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.314352 kubelet[2567]: E0421 10:12:47.314212 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.320863 kubelet[2567]: E0421 10:12:47.320233 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.320863 kubelet[2567]: W0421 10:12:47.320247 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.320863 kubelet[2567]: E0421 10:12:47.320262 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.323007 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2979) Apr 21 10:12:47.327018 kubelet[2567]: E0421 10:12:47.327002 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.327170 kubelet[2567]: W0421 10:12:47.327117 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.327170 kubelet[2567]: E0421 10:12:47.327136 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.336314 containerd[1505]: time="2026-04-21T10:12:47.335963609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d5bcb5fdc-xz2b4,Uid:49cbc1f1-3c1b-4126-abf1-0fe5f8661502,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:47.383315 containerd[1505]: time="2026-04-21T10:12:47.383246278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:47.384173 containerd[1505]: time="2026-04-21T10:12:47.383896270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:47.384173 containerd[1505]: time="2026-04-21T10:12:47.383909381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:47.384173 containerd[1505]: time="2026-04-21T10:12:47.383973692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:47.397985 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2980) Apr 21 10:12:47.408273 kubelet[2567]: E0421 10:12:47.408079 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.408273 kubelet[2567]: W0421 10:12:47.408095 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.408273 kubelet[2567]: E0421 10:12:47.408163 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.410309 systemd[1]: Started cri-containerd-268b4cfd494cfdd672ee70091d04dd6d7d4fad42244b7c560dd1d88a5cc2f2b3.scope - libcontainer container 268b4cfd494cfdd672ee70091d04dd6d7d4fad42244b7c560dd1d88a5cc2f2b3. Apr 21 10:12:47.413352 containerd[1505]: time="2026-04-21T10:12:47.413315711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f26lg,Uid:acfeffe5-ea07-4121-aaf0-93e8cd3a67cd,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:47.415905 kubelet[2567]: E0421 10:12:47.415887 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.415905 kubelet[2567]: W0421 10:12:47.415903 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.416018 kubelet[2567]: E0421 10:12:47.415919 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.418357 kubelet[2567]: E0421 10:12:47.418341 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.418357 kubelet[2567]: W0421 10:12:47.418354 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.418487 kubelet[2567]: E0421 10:12:47.418367 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.418737 kubelet[2567]: E0421 10:12:47.418656 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.418737 kubelet[2567]: W0421 10:12:47.418665 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.418737 kubelet[2567]: E0421 10:12:47.418673 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.419955 kubelet[2567]: E0421 10:12:47.419906 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.419955 kubelet[2567]: W0421 10:12:47.419916 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.419955 kubelet[2567]: E0421 10:12:47.419927 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.420838 kubelet[2567]: E0421 10:12:47.420458 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.420838 kubelet[2567]: W0421 10:12:47.420467 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.420838 kubelet[2567]: E0421 10:12:47.420475 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.424406 kubelet[2567]: E0421 10:12:47.424387 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.424443 kubelet[2567]: W0421 10:12:47.424401 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.424443 kubelet[2567]: E0421 10:12:47.424419 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.424692 kubelet[2567]: E0421 10:12:47.424676 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.424692 kubelet[2567]: W0421 10:12:47.424687 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.424752 kubelet[2567]: E0421 10:12:47.424695 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.426152 kubelet[2567]: E0421 10:12:47.426136 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.426152 kubelet[2567]: W0421 10:12:47.426149 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.426209 kubelet[2567]: E0421 10:12:47.426158 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.426545 kubelet[2567]: E0421 10:12:47.426466 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.426545 kubelet[2567]: W0421 10:12:47.426474 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.426545 kubelet[2567]: E0421 10:12:47.426481 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.427966 kubelet[2567]: E0421 10:12:47.427680 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.427966 kubelet[2567]: W0421 10:12:47.427928 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.427966 kubelet[2567]: E0421 10:12:47.427940 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.428880 kubelet[2567]: E0421 10:12:47.428695 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.428941 kubelet[2567]: W0421 10:12:47.428882 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.428941 kubelet[2567]: E0421 10:12:47.428892 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.429742 kubelet[2567]: E0421 10:12:47.429301 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.429742 kubelet[2567]: W0421 10:12:47.429309 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.429742 kubelet[2567]: E0421 10:12:47.429317 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.429875 kubelet[2567]: E0421 10:12:47.429868 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.429897 kubelet[2567]: W0421 10:12:47.429875 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.429897 kubelet[2567]: E0421 10:12:47.429882 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.433637 kubelet[2567]: E0421 10:12:47.433552 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.433637 kubelet[2567]: W0421 10:12:47.433564 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.433637 kubelet[2567]: E0421 10:12:47.433577 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.439863 kubelet[2567]: E0421 10:12:47.438916 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.439863 kubelet[2567]: W0421 10:12:47.438927 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.439863 kubelet[2567]: E0421 10:12:47.438938 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.441186 kubelet[2567]: E0421 10:12:47.441174 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.442182 kubelet[2567]: W0421 10:12:47.441836 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.442182 kubelet[2567]: E0421 10:12:47.441852 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.442580 kubelet[2567]: E0421 10:12:47.442570 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.442628 kubelet[2567]: W0421 10:12:47.442620 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.442669 kubelet[2567]: E0421 10:12:47.442662 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.443944 kubelet[2567]: E0421 10:12:47.443933 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.444087 kubelet[2567]: W0421 10:12:47.443992 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.444087 kubelet[2567]: E0421 10:12:47.444002 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.444287 kubelet[2567]: E0421 10:12:47.444277 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.444358 kubelet[2567]: W0421 10:12:47.444329 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.444358 kubelet[2567]: E0421 10:12:47.444340 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.444704 kubelet[2567]: E0421 10:12:47.444588 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.444704 kubelet[2567]: W0421 10:12:47.444596 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.444704 kubelet[2567]: E0421 10:12:47.444604 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.444997 kubelet[2567]: E0421 10:12:47.444989 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.445046 kubelet[2567]: W0421 10:12:47.445038 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.445098 kubelet[2567]: E0421 10:12:47.445090 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.445288 kubelet[2567]: E0421 10:12:47.445280 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.445395 kubelet[2567]: W0421 10:12:47.445385 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.445446 kubelet[2567]: E0421 10:12:47.445436 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.445740 kubelet[2567]: E0421 10:12:47.445722 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.445806 kubelet[2567]: W0421 10:12:47.445798 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.448206 kubelet[2567]: E0421 10:12:47.448180 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:47.452428 kubelet[2567]: E0421 10:12:47.451651 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.452428 kubelet[2567]: W0421 10:12:47.451661 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.452428 kubelet[2567]: E0421 10:12:47.451671 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.471506 kubelet[2567]: E0421 10:12:47.471486 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:47.471647 kubelet[2567]: W0421 10:12:47.471637 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:47.471728 kubelet[2567]: E0421 10:12:47.471718 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:47.488728 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2980) Apr 21 10:12:47.514309 containerd[1505]: time="2026-04-21T10:12:47.512283056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:47.514309 containerd[1505]: time="2026-04-21T10:12:47.512326177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:47.514309 containerd[1505]: time="2026-04-21T10:12:47.512350648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:47.514309 containerd[1505]: time="2026-04-21T10:12:47.512417929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:47.542321 containerd[1505]: time="2026-04-21T10:12:47.542253287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d5bcb5fdc-xz2b4,Uid:49cbc1f1-3c1b-4126-abf1-0fe5f8661502,Namespace:calico-system,Attempt:0,} returns sandbox id \"268b4cfd494cfdd672ee70091d04dd6d7d4fad42244b7c560dd1d88a5cc2f2b3\"" Apr 21 10:12:47.544482 containerd[1505]: time="2026-04-21T10:12:47.544462551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:12:47.549939 systemd[1]: Started cri-containerd-611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e.scope - libcontainer container 611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e. 
Apr 21 10:12:47.572841 containerd[1505]: time="2026-04-21T10:12:47.571954953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f26lg,Uid:acfeffe5-ea07-4121-aaf0-93e8cd3a67cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\"" Apr 21 10:12:48.885582 kubelet[2567]: E0421 10:12:48.884653 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:12:49.313055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959214511.mount: Deactivated successfully. Apr 21 10:12:49.691641 containerd[1505]: time="2026-04-21T10:12:49.691573324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:49.692949 containerd[1505]: time="2026-04-21T10:12:49.692823585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:12:49.694177 containerd[1505]: time="2026-04-21T10:12:49.694151219Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:49.696164 containerd[1505]: time="2026-04-21T10:12:49.695937280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:49.696597 containerd[1505]: time="2026-04-21T10:12:49.696463800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.150796944s" Apr 21 10:12:49.696597 containerd[1505]: time="2026-04-21T10:12:49.696497101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:12:49.699524 containerd[1505]: time="2026-04-21T10:12:49.699352911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:12:49.710311 containerd[1505]: time="2026-04-21T10:12:49.710274614Z" level=info msg="CreateContainer within sandbox \"268b4cfd494cfdd672ee70091d04dd6d7d4fad42244b7c560dd1d88a5cc2f2b3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:12:49.723555 containerd[1505]: time="2026-04-21T10:12:49.723516678Z" level=info msg="CreateContainer within sandbox \"268b4cfd494cfdd672ee70091d04dd6d7d4fad42244b7c560dd1d88a5cc2f2b3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d5938af835c5cb6976f530341b46f92cb83aee1cf5cc87b0546047d411b7a790\"" Apr 21 10:12:49.724021 containerd[1505]: time="2026-04-21T10:12:49.723985916Z" level=info msg="StartContainer for \"d5938af835c5cb6976f530341b46f92cb83aee1cf5cc87b0546047d411b7a790\"" Apr 21 10:12:49.744927 systemd[1]: Started cri-containerd-d5938af835c5cb6976f530341b46f92cb83aee1cf5cc87b0546047d411b7a790.scope - libcontainer container d5938af835c5cb6976f530341b46f92cb83aee1cf5cc87b0546047d411b7a790. 
Apr 21 10:12:49.779231 containerd[1505]: time="2026-04-21T10:12:49.779168342Z" level=info msg="StartContainer for \"d5938af835c5cb6976f530341b46f92cb83aee1cf5cc87b0546047d411b7a790\" returns successfully" Apr 21 10:12:49.987802 kubelet[2567]: I0421 10:12:49.987527 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d5bcb5fdc-xz2b4" podStartSLOduration=0.834482789 podStartE2EDuration="2.987514656s" podCreationTimestamp="2026-04-21 10:12:47 +0000 UTC" firstStartedPulling="2026-04-21 10:12:47.544142255 +0000 UTC m=+18.748833484" lastFinishedPulling="2026-04-21 10:12:49.697174122 +0000 UTC m=+20.901865351" observedRunningTime="2026-04-21 10:12:49.987107759 +0000 UTC m=+21.191798988" watchObservedRunningTime="2026-04-21 10:12:49.987514656 +0000 UTC m=+21.192205885" Apr 21 10:12:50.007940 kubelet[2567]: E0421 10:12:50.007712 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:50.007940 kubelet[2567]: W0421 10:12:50.007749 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:50.007940 kubelet[2567]: E0421 10:12:50.007769 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 10:12:50.884560 kubelet[2567]: E0421 10:12:50.884514 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f"
Apr 21 10:12:50.979769 kubelet[2567]: I0421 10:12:50.979729 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Error: unexpected end of JSON input" Apr 21 10:12:51.061243 kubelet[2567]: E0421 10:12:51.061221 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.061243 kubelet[2567]: W0421 10:12:51.061233 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.061243 kubelet[2567]: E0421 10:12:51.061240 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.061601 kubelet[2567]: E0421 10:12:51.061459 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.061601 kubelet[2567]: W0421 10:12:51.061466 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.061601 kubelet[2567]: E0421 10:12:51.061474 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:51.061898 kubelet[2567]: E0421 10:12:51.061684 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.061898 kubelet[2567]: W0421 10:12:51.061691 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.061898 kubelet[2567]: E0421 10:12:51.061699 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.062045 kubelet[2567]: E0421 10:12:51.061936 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.062045 kubelet[2567]: W0421 10:12:51.061945 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.062045 kubelet[2567]: E0421 10:12:51.061955 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:51.062206 kubelet[2567]: E0421 10:12:51.062190 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.062206 kubelet[2567]: W0421 10:12:51.062203 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.062347 kubelet[2567]: E0421 10:12:51.062210 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.062498 kubelet[2567]: E0421 10:12:51.062460 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.062498 kubelet[2567]: W0421 10:12:51.062468 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.062498 kubelet[2567]: E0421 10:12:51.062475 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:51.063154 kubelet[2567]: E0421 10:12:51.062875 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.063154 kubelet[2567]: W0421 10:12:51.062888 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.063154 kubelet[2567]: E0421 10:12:51.062896 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.063602 kubelet[2567]: E0421 10:12:51.063547 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.063602 kubelet[2567]: W0421 10:12:51.063582 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.063602 kubelet[2567]: E0421 10:12:51.063593 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:51.064121 kubelet[2567]: E0421 10:12:51.064101 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.064121 kubelet[2567]: W0421 10:12:51.064111 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.064121 kubelet[2567]: E0421 10:12:51.064118 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.064467 kubelet[2567]: E0421 10:12:51.064435 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.064467 kubelet[2567]: W0421 10:12:51.064462 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.064523 kubelet[2567]: E0421 10:12:51.064486 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:51.064959 kubelet[2567]: E0421 10:12:51.064924 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.064959 kubelet[2567]: W0421 10:12:51.064946 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.065030 kubelet[2567]: E0421 10:12:51.064963 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.065603 kubelet[2567]: E0421 10:12:51.065532 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.065603 kubelet[2567]: W0421 10:12:51.065544 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.065603 kubelet[2567]: E0421 10:12:51.065555 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:51.066681 kubelet[2567]: E0421 10:12:51.066659 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:51.066723 kubelet[2567]: W0421 10:12:51.066682 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:51.066723 kubelet[2567]: E0421 10:12:51.066698 2567 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:51.391138 containerd[1505]: time="2026-04-21T10:12:51.391082518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:51.392287 containerd[1505]: time="2026-04-21T10:12:51.392138265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:12:51.393403 containerd[1505]: time="2026-04-21T10:12:51.393075950Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:51.395034 containerd[1505]: time="2026-04-21T10:12:51.394974259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:51.396253 containerd[1505]: time="2026-04-21T10:12:51.395453877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.696068085s" Apr 21 10:12:51.396253 containerd[1505]: time="2026-04-21T10:12:51.395487897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:12:51.400391 containerd[1505]: time="2026-04-21T10:12:51.400351953Z" level=info msg="CreateContainer within sandbox \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:12:51.416342 containerd[1505]: time="2026-04-21T10:12:51.416294842Z" level=info msg="CreateContainer within sandbox \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544\"" Apr 21 10:12:51.417077 containerd[1505]: time="2026-04-21T10:12:51.417046693Z" level=info msg="StartContainer for \"2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544\"" Apr 21 10:12:51.442306 systemd[1]: run-containerd-runc-k8s.io-2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544-runc.xEV4lw.mount: Deactivated successfully. Apr 21 10:12:51.453964 systemd[1]: Started cri-containerd-2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544.scope - libcontainer container 2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544. 
Apr 21 10:12:51.481672 containerd[1505]: time="2026-04-21T10:12:51.481627070Z" level=info msg="StartContainer for \"2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544\" returns successfully" Apr 21 10:12:51.495591 systemd[1]: cri-containerd-2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544.scope: Deactivated successfully. Apr 21 10:12:51.523554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544-rootfs.mount: Deactivated successfully. Apr 21 10:12:51.643580 containerd[1505]: time="2026-04-21T10:12:51.643421983Z" level=info msg="shim disconnected" id=2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544 namespace=k8s.io Apr 21 10:12:51.643580 containerd[1505]: time="2026-04-21T10:12:51.643491964Z" level=warning msg="cleaning up after shim disconnected" id=2e9ae7348ecd8d3826f76d70213da51d01a2af1280a9c0a32608180433b45544 namespace=k8s.io Apr 21 10:12:51.643580 containerd[1505]: time="2026-04-21T10:12:51.643502824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:51.987234 containerd[1505]: time="2026-04-21T10:12:51.987101362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:12:52.886498 kubelet[2567]: E0421 10:12:52.886454 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:12:54.886083 kubelet[2567]: E0421 10:12:54.884915 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" 
podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:12:56.251604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410086508.mount: Deactivated successfully. Apr 21 10:12:56.283964 containerd[1505]: time="2026-04-21T10:12:56.283919649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:56.285112 containerd[1505]: time="2026-04-21T10:12:56.285038442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:12:56.286147 containerd[1505]: time="2026-04-21T10:12:56.286115224Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:56.288045 containerd[1505]: time="2026-04-21T10:12:56.288016736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:56.289248 containerd[1505]: time="2026-04-21T10:12:56.289230699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.302077396s" Apr 21 10:12:56.289717 containerd[1505]: time="2026-04-21T10:12:56.289312910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:12:56.294843 containerd[1505]: time="2026-04-21T10:12:56.294802664Z" level=info msg="CreateContainer within sandbox 
\"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:12:56.311044 containerd[1505]: time="2026-04-21T10:12:56.311005748Z" level=info msg="CreateContainer within sandbox \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849\"" Apr 21 10:12:56.311637 containerd[1505]: time="2026-04-21T10:12:56.311609415Z" level=info msg="StartContainer for \"2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849\"" Apr 21 10:12:56.346986 systemd[1]: Started cri-containerd-2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849.scope - libcontainer container 2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849. Apr 21 10:12:56.375612 containerd[1505]: time="2026-04-21T10:12:56.375575614Z" level=info msg="StartContainer for \"2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849\" returns successfully" Apr 21 10:12:56.416563 systemd[1]: cri-containerd-2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849.scope: Deactivated successfully. 
Apr 21 10:12:56.513562 containerd[1505]: time="2026-04-21T10:12:56.513413637Z" level=info msg="shim disconnected" id=2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849 namespace=k8s.io Apr 21 10:12:56.513562 containerd[1505]: time="2026-04-21T10:12:56.513468048Z" level=warning msg="cleaning up after shim disconnected" id=2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849 namespace=k8s.io Apr 21 10:12:56.513562 containerd[1505]: time="2026-04-21T10:12:56.513477408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:56.886696 kubelet[2567]: E0421 10:12:56.885175 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:12:56.997728 containerd[1505]: time="2026-04-21T10:12:56.997628790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:12:57.250725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2982b4d001b27a8b9b138bb9686b8577ed26e2f804a06a8a825bd88dfc3b2849-rootfs.mount: Deactivated successfully. 
Apr 21 10:12:58.890231 kubelet[2567]: E0421 10:12:58.890177 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:12:59.721102 containerd[1505]: time="2026-04-21T10:12:59.721057212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:59.722246 containerd[1505]: time="2026-04-21T10:12:59.722159902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:12:59.723120 containerd[1505]: time="2026-04-21T10:12:59.723082271Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:59.725990 containerd[1505]: time="2026-04-21T10:12:59.725964299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.728236348s" Apr 21 10:12:59.725990 containerd[1505]: time="2026-04-21T10:12:59.725987299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:12:59.726534 containerd[1505]: time="2026-04-21T10:12:59.725363802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 21 10:12:59.730138 containerd[1505]: time="2026-04-21T10:12:59.730113647Z" level=info msg="CreateContainer within sandbox \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:12:59.743500 containerd[1505]: time="2026-04-21T10:12:59.743471804Z" level=info msg="CreateContainer within sandbox \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d\"" Apr 21 10:12:59.743970 containerd[1505]: time="2026-04-21T10:12:59.743881528Z" level=info msg="StartContainer for \"8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d\"" Apr 21 10:12:59.773923 systemd[1]: Started cri-containerd-8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d.scope - libcontainer container 8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d. Apr 21 10:12:59.806396 containerd[1505]: time="2026-04-21T10:12:59.806365250Z" level=info msg="StartContainer for \"8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d\" returns successfully" Apr 21 10:13:00.276193 systemd[1]: cri-containerd-8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d.scope: Deactivated successfully. Apr 21 10:13:00.293598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d-rootfs.mount: Deactivated successfully. 
Apr 21 10:13:00.302726 containerd[1505]: time="2026-04-21T10:13:00.302664690Z" level=info msg="shim disconnected" id=8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d namespace=k8s.io Apr 21 10:13:00.302726 containerd[1505]: time="2026-04-21T10:13:00.302720871Z" level=warning msg="cleaning up after shim disconnected" id=8d65138f1c5baa821251895fd66eab8e1103a4685f1872aea3503747ecbc758d namespace=k8s.io Apr 21 10:13:00.302726 containerd[1505]: time="2026-04-21T10:13:00.302727731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:13:00.359352 kubelet[2567]: I0421 10:13:00.359315 2567 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 21 10:13:00.395376 systemd[1]: Created slice kubepods-burstable-podb4deaa4d_ddd7_4d60_919c_5bfca2a7dd6c.slice - libcontainer container kubepods-burstable-podb4deaa4d_ddd7_4d60_919c_5bfca2a7dd6c.slice. Apr 21 10:13:00.403183 systemd[1]: Created slice kubepods-besteffort-podcf17b894_678a_46fa_83ff_56280e6c52d6.slice - libcontainer container kubepods-besteffort-podcf17b894_678a_46fa_83ff_56280e6c52d6.slice. Apr 21 10:13:00.410073 systemd[1]: Created slice kubepods-besteffort-pod563db82c_210b_4750_a813_d95c4fea43ab.slice - libcontainer container kubepods-besteffort-pod563db82c_210b_4750_a813_d95c4fea43ab.slice. Apr 21 10:13:00.419009 systemd[1]: Created slice kubepods-burstable-poddf642170_a65e_4ca4_b9a5_68415ff77a88.slice - libcontainer container kubepods-burstable-poddf642170_a65e_4ca4_b9a5_68415ff77a88.slice. Apr 21 10:13:00.423785 systemd[1]: Created slice kubepods-besteffort-pod33a681f4_231e_46d8_acc0_27e3c3cf72d1.slice - libcontainer container kubepods-besteffort-pod33a681f4_231e_46d8_acc0_27e3c3cf72d1.slice. 
Apr 21 10:13:00.425038 kubelet[2567]: I0421 10:13:00.424860 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/33a681f4-231e-46d8-acc0-27e3c3cf72d1-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-8vcnx\" (UID: \"33a681f4-231e-46d8-acc0-27e3c3cf72d1\") " pod="calico-system/goldmane-cccfbd5cf-8vcnx" Apr 21 10:13:00.426194 kubelet[2567]: I0421 10:13:00.426182 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/563db82c-210b-4750-a813-d95c4fea43ab-calico-apiserver-certs\") pod \"calico-apiserver-7f7c8cd7bf-5zwrh\" (UID: \"563db82c-210b-4750-a813-d95c4fea43ab\") " pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" Apr 21 10:13:00.426830 kubelet[2567]: I0421 10:13:00.426537 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqcsd\" (UniqueName: \"kubernetes.io/projected/df642170-a65e-4ca4-b9a5-68415ff77a88-kube-api-access-zqcsd\") pod \"coredns-66bc5c9577-n2ccb\" (UID: \"df642170-a65e-4ca4-b9a5-68415ff77a88\") " pod="kube-system/coredns-66bc5c9577-n2ccb" Apr 21 10:13:00.426830 kubelet[2567]: I0421 10:13:00.426561 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbs2m\" (UniqueName: \"kubernetes.io/projected/563db82c-210b-4750-a813-d95c4fea43ab-kube-api-access-tbs2m\") pod \"calico-apiserver-7f7c8cd7bf-5zwrh\" (UID: \"563db82c-210b-4750-a813-d95c4fea43ab\") " pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" Apr 21 10:13:00.426830 kubelet[2567]: I0421 10:13:00.426577 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjs8\" (UniqueName: \"kubernetes.io/projected/cf17b894-678a-46fa-83ff-56280e6c52d6-kube-api-access-chjs8\") pod 
\"calico-kube-controllers-df6fcc68c-6prn8\" (UID: \"cf17b894-678a-46fa-83ff-56280e6c52d6\") " pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" Apr 21 10:13:00.426830 kubelet[2567]: I0421 10:13:00.426699 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c-config-volume\") pod \"coredns-66bc5c9577-9qxs6\" (UID: \"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c\") " pod="kube-system/coredns-66bc5c9577-9qxs6" Apr 21 10:13:00.426830 kubelet[2567]: I0421 10:13:00.426713 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a681f4-231e-46d8-acc0-27e3c3cf72d1-config\") pod \"goldmane-cccfbd5cf-8vcnx\" (UID: \"33a681f4-231e-46d8-acc0-27e3c3cf72d1\") " pod="calico-system/goldmane-cccfbd5cf-8vcnx" Apr 21 10:13:00.427079 kubelet[2567]: I0421 10:13:00.426723 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33a681f4-231e-46d8-acc0-27e3c3cf72d1-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-8vcnx\" (UID: \"33a681f4-231e-46d8-acc0-27e3c3cf72d1\") " pod="calico-system/goldmane-cccfbd5cf-8vcnx" Apr 21 10:13:00.427079 kubelet[2567]: I0421 10:13:00.426734 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf17b894-678a-46fa-83ff-56280e6c52d6-tigera-ca-bundle\") pod \"calico-kube-controllers-df6fcc68c-6prn8\" (UID: \"cf17b894-678a-46fa-83ff-56280e6c52d6\") " pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" Apr 21 10:13:00.427079 kubelet[2567]: I0421 10:13:00.426745 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/df642170-a65e-4ca4-b9a5-68415ff77a88-config-volume\") pod \"coredns-66bc5c9577-n2ccb\" (UID: \"df642170-a65e-4ca4-b9a5-68415ff77a88\") " pod="kube-system/coredns-66bc5c9577-n2ccb" Apr 21 10:13:00.428627 kubelet[2567]: I0421 10:13:00.427853 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4t7q\" (UniqueName: \"kubernetes.io/projected/b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c-kube-api-access-q4t7q\") pod \"coredns-66bc5c9577-9qxs6\" (UID: \"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c\") " pod="kube-system/coredns-66bc5c9577-9qxs6" Apr 21 10:13:00.428627 kubelet[2567]: I0421 10:13:00.427872 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr6s9\" (UniqueName: \"kubernetes.io/projected/33a681f4-231e-46d8-acc0-27e3c3cf72d1-kube-api-access-vr6s9\") pod \"goldmane-cccfbd5cf-8vcnx\" (UID: \"33a681f4-231e-46d8-acc0-27e3c3cf72d1\") " pod="calico-system/goldmane-cccfbd5cf-8vcnx" Apr 21 10:13:00.434165 systemd[1]: Created slice kubepods-besteffort-pod022153ec_c601_49db_a266_f980fee052c1.slice - libcontainer container kubepods-besteffort-pod022153ec_c601_49db_a266_f980fee052c1.slice. Apr 21 10:13:00.441058 systemd[1]: Created slice kubepods-besteffort-pod421160f3_b971_4667_88a9_535bf4abfe5b.slice - libcontainer container kubepods-besteffort-pod421160f3_b971_4667_88a9_535bf4abfe5b.slice. 
Apr 21 10:13:00.529415 kubelet[2567]: I0421 10:13:00.529217 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-nginx-config\") pod \"whisker-85c6676df8-6p4r8\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " pod="calico-system/whisker-85c6676df8-6p4r8" Apr 21 10:13:00.529415 kubelet[2567]: I0421 10:13:00.529297 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/022153ec-c601-49db-a266-f980fee052c1-whisker-backend-key-pair\") pod \"whisker-85c6676df8-6p4r8\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " pod="calico-system/whisker-85c6676df8-6p4r8" Apr 21 10:13:00.530830 kubelet[2567]: I0421 10:13:00.530692 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/421160f3-b971-4667-88a9-535bf4abfe5b-calico-apiserver-certs\") pod \"calico-apiserver-7f7c8cd7bf-xtfdc\" (UID: \"421160f3-b971-4667-88a9-535bf4abfe5b\") " pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" Apr 21 10:13:00.531048 kubelet[2567]: I0421 10:13:00.530897 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5st\" (UniqueName: \"kubernetes.io/projected/421160f3-b971-4667-88a9-535bf4abfe5b-kube-api-access-fm5st\") pod \"calico-apiserver-7f7c8cd7bf-xtfdc\" (UID: \"421160f3-b971-4667-88a9-535bf4abfe5b\") " pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" Apr 21 10:13:00.531048 kubelet[2567]: I0421 10:13:00.530930 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-whisker-ca-bundle\") pod \"whisker-85c6676df8-6p4r8\" (UID: 
\"022153ec-c601-49db-a266-f980fee052c1\") " pod="calico-system/whisker-85c6676df8-6p4r8" Apr 21 10:13:00.531048 kubelet[2567]: I0421 10:13:00.530959 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k28t\" (UniqueName: \"kubernetes.io/projected/022153ec-c601-49db-a266-f980fee052c1-kube-api-access-7k28t\") pod \"whisker-85c6676df8-6p4r8\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " pod="calico-system/whisker-85c6676df8-6p4r8" Apr 21 10:13:00.705713 containerd[1505]: time="2026-04-21T10:13:00.705568608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qxs6,Uid:b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c,Namespace:kube-system,Attempt:0,}" Apr 21 10:13:00.710973 containerd[1505]: time="2026-04-21T10:13:00.710938006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df6fcc68c-6prn8,Uid:cf17b894-678a-46fa-83ff-56280e6c52d6,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:00.716455 containerd[1505]: time="2026-04-21T10:13:00.716422324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-5zwrh,Uid:563db82c-210b-4750-a813-d95c4fea43ab,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:00.731379 containerd[1505]: time="2026-04-21T10:13:00.731331787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n2ccb,Uid:df642170-a65e-4ca4-b9a5-68415ff77a88,Namespace:kube-system,Attempt:0,}" Apr 21 10:13:00.732166 containerd[1505]: time="2026-04-21T10:13:00.731902893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8vcnx,Uid:33a681f4-231e-46d8-acc0-27e3c3cf72d1,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:00.742422 containerd[1505]: time="2026-04-21T10:13:00.742391696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c6676df8-6p4r8,Uid:022153ec-c601-49db-a266-f980fee052c1,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:00.749853 
containerd[1505]: time="2026-04-21T10:13:00.747867714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-xtfdc,Uid:421160f3-b971-4667-88a9-535bf4abfe5b,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:00.884964 containerd[1505]: time="2026-04-21T10:13:00.884869854Z" level=error msg="Failed to destroy network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.886183 containerd[1505]: time="2026-04-21T10:13:00.886157706Z" level=error msg="encountered an error cleaning up failed sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.887407 containerd[1505]: time="2026-04-21T10:13:00.886331778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qxs6,Uid:b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.887494 kubelet[2567]: E0421 10:13:00.886517 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.887494 kubelet[2567]: E0421 10:13:00.886560 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9qxs6" Apr 21 10:13:00.887494 kubelet[2567]: E0421 10:13:00.886576 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9qxs6" Apr 21 10:13:00.887618 kubelet[2567]: E0421 10:13:00.886619 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9qxs6_kube-system(b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9qxs6_kube-system(b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9qxs6" podUID="b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c" Apr 21 10:13:00.890790 containerd[1505]: time="2026-04-21T10:13:00.890314864Z" level=error msg="Failed to destroy network for sandbox 
\"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.891208 containerd[1505]: time="2026-04-21T10:13:00.891136461Z" level=error msg="encountered an error cleaning up failed sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.891208 containerd[1505]: time="2026-04-21T10:13:00.891174611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-5zwrh,Uid:563db82c-210b-4750-a813-d95c4fea43ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.891696 kubelet[2567]: E0421 10:13:00.891342 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.891696 kubelet[2567]: E0421 10:13:00.891386 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" Apr 21 10:13:00.891696 kubelet[2567]: E0421 10:13:00.891400 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" Apr 21 10:13:00.891775 kubelet[2567]: E0421 10:13:00.891430 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f7c8cd7bf-5zwrh_calico-system(563db82c-210b-4750-a813-d95c4fea43ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f7c8cd7bf-5zwrh_calico-system(563db82c-210b-4750-a813-d95c4fea43ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" podUID="563db82c-210b-4750-a813-d95c4fea43ab" Apr 21 10:13:00.899302 systemd[1]: Created slice kubepods-besteffort-pod45c6765d_bd67_4624_ba22_eae28e77978f.slice - libcontainer container kubepods-besteffort-pod45c6765d_bd67_4624_ba22_eae28e77978f.slice. 
Apr 21 10:13:00.905350 containerd[1505]: time="2026-04-21T10:13:00.904541250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9tq,Uid:45c6765d-bd67-4624-ba22-eae28e77978f,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:00.912542 containerd[1505]: time="2026-04-21T10:13:00.912517811Z" level=error msg="Failed to destroy network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.912976 containerd[1505]: time="2026-04-21T10:13:00.912955415Z" level=error msg="encountered an error cleaning up failed sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.913401 containerd[1505]: time="2026-04-21T10:13:00.913365888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df6fcc68c-6prn8,Uid:cf17b894-678a-46fa-83ff-56280e6c52d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.913650 kubelet[2567]: E0421 10:13:00.913603 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.913974 kubelet[2567]: E0421 10:13:00.913738 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" Apr 21 10:13:00.913974 kubelet[2567]: E0421 10:13:00.913757 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" Apr 21 10:13:00.913974 kubelet[2567]: E0421 10:13:00.913833 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-df6fcc68c-6prn8_calico-system(cf17b894-678a-46fa-83ff-56280e6c52d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-df6fcc68c-6prn8_calico-system(cf17b894-678a-46fa-83ff-56280e6c52d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" podUID="cf17b894-678a-46fa-83ff-56280e6c52d6" Apr 21 10:13:00.937212 containerd[1505]: 
time="2026-04-21T10:13:00.937155770Z" level=error msg="Failed to destroy network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.937584 containerd[1505]: time="2026-04-21T10:13:00.937552204Z" level=error msg="encountered an error cleaning up failed sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.937634 containerd[1505]: time="2026-04-21T10:13:00.937607144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8vcnx,Uid:33a681f4-231e-46d8-acc0-27e3c3cf72d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.937894 kubelet[2567]: E0421 10:13:00.937809 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.938038 kubelet[2567]: E0421 10:13:00.937912 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8vcnx" Apr 21 10:13:00.938069 kubelet[2567]: E0421 10:13:00.938046 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-8vcnx" Apr 21 10:13:00.938116 kubelet[2567]: E0421 10:13:00.938099 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-8vcnx_calico-system(33a681f4-231e-46d8-acc0-27e3c3cf72d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-8vcnx_calico-system(33a681f4-231e-46d8-acc0-27e3c3cf72d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8vcnx" podUID="33a681f4-231e-46d8-acc0-27e3c3cf72d1" Apr 21 10:13:00.957972 containerd[1505]: time="2026-04-21T10:13:00.957902415Z" level=error msg="Failed to destroy network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 
10:13:00.958328 containerd[1505]: time="2026-04-21T10:13:00.958186708Z" level=error msg="encountered an error cleaning up failed sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.958328 containerd[1505]: time="2026-04-21T10:13:00.958230808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c6676df8-6p4r8,Uid:022153ec-c601-49db-a266-f980fee052c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.958436 kubelet[2567]: E0421 10:13:00.958395 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.958575 kubelet[2567]: E0421 10:13:00.958433 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c6676df8-6p4r8" Apr 21 10:13:00.958575 kubelet[2567]: E0421 10:13:00.958450 2567 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c6676df8-6p4r8" Apr 21 10:13:00.958575 kubelet[2567]: E0421 10:13:00.958490 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85c6676df8-6p4r8_calico-system(022153ec-c601-49db-a266-f980fee052c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85c6676df8-6p4r8_calico-system(022153ec-c601-49db-a266-f980fee052c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85c6676df8-6p4r8" podUID="022153ec-c601-49db-a266-f980fee052c1" Apr 21 10:13:00.975503 containerd[1505]: time="2026-04-21T10:13:00.975448771Z" level=error msg="Failed to destroy network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.976156 containerd[1505]: time="2026-04-21T10:13:00.976089387Z" level=error msg="encountered an error cleaning up failed sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.976156 containerd[1505]: time="2026-04-21T10:13:00.976136367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-xtfdc,Uid:421160f3-b971-4667-88a9-535bf4abfe5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.976394 kubelet[2567]: E0421 10:13:00.976310 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.976394 kubelet[2567]: E0421 10:13:00.976362 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" Apr 21 10:13:00.976446 kubelet[2567]: E0421 10:13:00.976394 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" Apr 21 10:13:00.976470 kubelet[2567]: E0421 10:13:00.976442 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f7c8cd7bf-xtfdc_calico-system(421160f3-b971-4667-88a9-535bf4abfe5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f7c8cd7bf-xtfdc_calico-system(421160f3-b971-4667-88a9-535bf4abfe5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" podUID="421160f3-b971-4667-88a9-535bf4abfe5b" Apr 21 10:13:00.977726 containerd[1505]: time="2026-04-21T10:13:00.977502700Z" level=error msg="Failed to destroy network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.977985 containerd[1505]: time="2026-04-21T10:13:00.977967074Z" level=error msg="encountered an error cleaning up failed sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.978102 containerd[1505]: time="2026-04-21T10:13:00.978060135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n2ccb,Uid:df642170-a65e-4ca4-b9a5-68415ff77a88,Namespace:kube-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.978279 kubelet[2567]: E0421 10:13:00.978261 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:00.978451 kubelet[2567]: E0421 10:13:00.978319 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-n2ccb" Apr 21 10:13:00.978451 kubelet[2567]: E0421 10:13:00.978332 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-n2ccb" Apr 21 10:13:00.978573 kubelet[2567]: E0421 10:13:00.978540 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-n2ccb_kube-system(df642170-a65e-4ca4-b9a5-68415ff77a88)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-66bc5c9577-n2ccb_kube-system(df642170-a65e-4ca4-b9a5-68415ff77a88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-n2ccb" podUID="df642170-a65e-4ca4-b9a5-68415ff77a88" Apr 21 10:13:00.999718 containerd[1505]: time="2026-04-21T10:13:00.999677927Z" level=error msg="Failed to destroy network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.000198 containerd[1505]: time="2026-04-21T10:13:01.000175741Z" level=error msg="encountered an error cleaning up failed sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.000244 containerd[1505]: time="2026-04-21T10:13:01.000213022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9tq,Uid:45c6765d-bd67-4624-ba22-eae28e77978f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.000389 kubelet[2567]: E0421 10:13:01.000355 2567 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.000454 kubelet[2567]: E0421 10:13:01.000399 2567 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm9tq" Apr 21 10:13:01.000454 kubelet[2567]: E0421 10:13:01.000413 2567 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm9tq" Apr 21 10:13:01.000504 kubelet[2567]: E0421 10:13:01.000452 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jm9tq_calico-system(45c6765d-bd67-4624-ba22-eae28e77978f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jm9tq_calico-system(45c6765d-bd67-4624-ba22-eae28e77978f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:13:01.027630 kubelet[2567]: I0421 10:13:01.027536 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:01.028255 containerd[1505]: time="2026-04-21T10:13:01.028229967Z" level=info msg="StopPodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\"" Apr 21 10:13:01.028533 containerd[1505]: time="2026-04-21T10:13:01.028497309Z" level=info msg="Ensure that sandbox 7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b in task-service has been cleanup successfully" Apr 21 10:13:01.029336 kubelet[2567]: I0421 10:13:01.029255 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:01.029994 containerd[1505]: time="2026-04-21T10:13:01.029693679Z" level=info msg="StopPodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\"" Apr 21 10:13:01.029994 containerd[1505]: time="2026-04-21T10:13:01.029832770Z" level=info msg="Ensure that sandbox 759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5 in task-service has been cleanup successfully" Apr 21 10:13:01.030451 kubelet[2567]: I0421 10:13:01.030204 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:01.030901 containerd[1505]: time="2026-04-21T10:13:01.030884889Z" level=info msg="StopPodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\"" Apr 21 10:13:01.032952 containerd[1505]: time="2026-04-21T10:13:01.032928216Z" level=info msg="Ensure that sandbox 729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea in task-service has been cleanup successfully" Apr 
21 10:13:01.041513 kubelet[2567]: I0421 10:13:01.040567 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:01.044205 containerd[1505]: time="2026-04-21T10:13:01.043783798Z" level=info msg="StopPodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\"" Apr 21 10:13:01.044205 containerd[1505]: time="2026-04-21T10:13:01.044036550Z" level=info msg="Ensure that sandbox 81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e in task-service has been cleanup successfully" Apr 21 10:13:01.053095 kubelet[2567]: I0421 10:13:01.053069 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:01.056349 containerd[1505]: time="2026-04-21T10:13:01.056307352Z" level=info msg="StopPodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\"" Apr 21 10:13:01.056495 containerd[1505]: time="2026-04-21T10:13:01.056478624Z" level=info msg="Ensure that sandbox 8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656 in task-service has been cleanup successfully" Apr 21 10:13:01.062119 kubelet[2567]: I0421 10:13:01.062092 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:01.063236 containerd[1505]: time="2026-04-21T10:13:01.063208440Z" level=info msg="StopPodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\"" Apr 21 10:13:01.063970 kubelet[2567]: I0421 10:13:01.063947 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:01.064610 containerd[1505]: time="2026-04-21T10:13:01.064570951Z" level=info msg="StopPodSandbox for 
\"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\"" Apr 21 10:13:01.065252 containerd[1505]: time="2026-04-21T10:13:01.065231496Z" level=info msg="Ensure that sandbox 280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c in task-service has been cleanup successfully" Apr 21 10:13:01.065906 containerd[1505]: time="2026-04-21T10:13:01.065833922Z" level=info msg="Ensure that sandbox 41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a in task-service has been cleanup successfully" Apr 21 10:13:01.077617 kubelet[2567]: I0421 10:13:01.076449 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:01.078985 containerd[1505]: time="2026-04-21T10:13:01.078960052Z" level=info msg="StopPodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\"" Apr 21 10:13:01.079232 containerd[1505]: time="2026-04-21T10:13:01.079208083Z" level=info msg="Ensure that sandbox a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9 in task-service has been cleanup successfully" Apr 21 10:13:01.095435 containerd[1505]: time="2026-04-21T10:13:01.095406279Z" level=info msg="CreateContainer within sandbox \"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:13:01.130349 containerd[1505]: time="2026-04-21T10:13:01.130305772Z" level=error msg="StopPodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" failed" error="failed to destroy network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.131792 kubelet[2567]: E0421 10:13:01.131748 2567 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:01.131970 kubelet[2567]: E0421 10:13:01.131806 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e"} Apr 21 10:13:01.132018 kubelet[2567]: E0421 10:13:01.131979 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"563db82c-210b-4750-a813-d95c4fea43ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.132119 kubelet[2567]: E0421 10:13:01.132098 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"563db82c-210b-4750-a813-d95c4fea43ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" podUID="563db82c-210b-4750-a813-d95c4fea43ab" Apr 21 10:13:01.150208 containerd[1505]: time="2026-04-21T10:13:01.150071837Z" level=info msg="CreateContainer within sandbox 
\"611731b6d1e52f344cc8f09d10cf39ae5d7c1e1fd4e319a5287d40b9e26a4d7e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6\"" Apr 21 10:13:01.153857 containerd[1505]: time="2026-04-21T10:13:01.152303306Z" level=info msg="StartContainer for \"e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6\"" Apr 21 10:13:01.154392 containerd[1505]: time="2026-04-21T10:13:01.154355894Z" level=error msg="StopPodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" failed" error="failed to destroy network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.154617 kubelet[2567]: E0421 10:13:01.154591 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:01.154737 kubelet[2567]: E0421 10:13:01.154723 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5"} Apr 21 10:13:01.154827 kubelet[2567]: E0421 10:13:01.154806 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf17b894-678a-46fa-83ff-56280e6c52d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.154940 kubelet[2567]: E0421 10:13:01.154925 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf17b894-678a-46fa-83ff-56280e6c52d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" podUID="cf17b894-678a-46fa-83ff-56280e6c52d6" Apr 21 10:13:01.156847 containerd[1505]: time="2026-04-21T10:13:01.156803444Z" level=error msg="StopPodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" failed" error="failed to destroy network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.156983 kubelet[2567]: E0421 10:13:01.156940 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:01.157042 kubelet[2567]: E0421 10:13:01.156988 2567 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a"} Apr 21 10:13:01.157042 kubelet[2567]: E0421 10:13:01.157007 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33a681f4-231e-46d8-acc0-27e3c3cf72d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.157042 kubelet[2567]: E0421 10:13:01.157032 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33a681f4-231e-46d8-acc0-27e3c3cf72d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-8vcnx" podUID="33a681f4-231e-46d8-acc0-27e3c3cf72d1" Apr 21 10:13:01.170844 containerd[1505]: time="2026-04-21T10:13:01.170786930Z" level=error msg="StopPodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" failed" error="failed to destroy network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.171849 kubelet[2567]: E0421 10:13:01.171168 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:01.171849 kubelet[2567]: E0421 10:13:01.171314 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea"} Apr 21 10:13:01.171849 kubelet[2567]: E0421 10:13:01.171362 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df642170-a65e-4ca4-b9a5-68415ff77a88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.171849 kubelet[2567]: E0421 10:13:01.171400 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df642170-a65e-4ca4-b9a5-68415ff77a88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-n2ccb" podUID="df642170-a65e-4ca4-b9a5-68415ff77a88" Apr 21 10:13:01.179707 containerd[1505]: time="2026-04-21T10:13:01.179673705Z" level=error msg="StopPodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" 
failed" error="failed to destroy network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.180085 kubelet[2567]: E0421 10:13:01.180014 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:01.180085 kubelet[2567]: E0421 10:13:01.180081 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b"} Apr 21 10:13:01.180174 kubelet[2567]: E0421 10:13:01.180106 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"421160f3-b971-4667-88a9-535bf4abfe5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.180174 kubelet[2567]: E0421 10:13:01.180142 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"421160f3-b971-4667-88a9-535bf4abfe5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" podUID="421160f3-b971-4667-88a9-535bf4abfe5b" Apr 21 10:13:01.183139 containerd[1505]: time="2026-04-21T10:13:01.183114124Z" level=error msg="StopPodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" failed" error="failed to destroy network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.183395 kubelet[2567]: E0421 10:13:01.183353 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:01.183445 kubelet[2567]: E0421 10:13:01.183398 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9"} Apr 21 10:13:01.183445 kubelet[2567]: E0421 10:13:01.183418 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45c6765d-bd67-4624-ba22-eae28e77978f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.183445 kubelet[2567]: E0421 10:13:01.183435 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45c6765d-bd67-4624-ba22-eae28e77978f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm9tq" podUID="45c6765d-bd67-4624-ba22-eae28e77978f" Apr 21 10:13:01.191657 containerd[1505]: time="2026-04-21T10:13:01.191630875Z" level=error msg="StopPodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" failed" error="failed to destroy network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.191934 containerd[1505]: time="2026-04-21T10:13:01.191863127Z" level=error msg="StopPodSandbox for \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" failed" error="failed to destroy network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:13:01.192661 kubelet[2567]: E0421 10:13:01.192622 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:01.192712 kubelet[2567]: E0421 10:13:01.192659 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c"} Apr 21 10:13:01.192712 kubelet[2567]: E0421 10:13:01.192681 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.192712 kubelet[2567]: E0421 10:13:01.192700 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9qxs6" podUID="b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c" Apr 21 10:13:01.192800 kubelet[2567]: E0421 10:13:01.192721 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:01.192800 kubelet[2567]: E0421 10:13:01.192731 2567 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656"} Apr 21 10:13:01.192800 kubelet[2567]: E0421 10:13:01.192741 2567 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"022153ec-c601-49db-a266-f980fee052c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:13:01.192800 kubelet[2567]: E0421 10:13:01.192788 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"022153ec-c601-49db-a266-f980fee052c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85c6676df8-6p4r8" podUID="022153ec-c601-49db-a266-f980fee052c1" Apr 21 10:13:01.212998 systemd[1]: Started cri-containerd-e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6.scope - libcontainer container e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6. 
Apr 21 10:13:01.240574 containerd[1505]: time="2026-04-21T10:13:01.240544894Z" level=info msg="StartContainer for \"e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6\" returns successfully" Apr 21 10:13:01.747052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea-shm.mount: Deactivated successfully. Apr 21 10:13:01.747230 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a-shm.mount: Deactivated successfully. Apr 21 10:13:01.747368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e-shm.mount: Deactivated successfully. Apr 21 10:13:01.747545 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5-shm.mount: Deactivated successfully. Apr 21 10:13:01.747683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c-shm.mount: Deactivated successfully. 
Apr 21 10:13:02.090655 containerd[1505]: time="2026-04-21T10:13:02.089806654Z" level=info msg="StopPodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\"" Apr 21 10:13:02.120921 kubelet[2567]: I0421 10:13:02.120103 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f26lg" podStartSLOduration=2.966810513 podStartE2EDuration="15.120090293s" podCreationTimestamp="2026-04-21 10:12:47 +0000 UTC" firstStartedPulling="2026-04-21 10:12:47.573706028 +0000 UTC m=+18.778397267" lastFinishedPulling="2026-04-21 10:12:59.726985808 +0000 UTC m=+30.931677047" observedRunningTime="2026-04-21 10:13:02.119584239 +0000 UTC m=+33.324275478" watchObservedRunningTime="2026-04-21 10:13:02.120090293 +0000 UTC m=+33.324781522" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.163 [INFO][3840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.163 [INFO][3840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" iface="eth0" netns="/var/run/netns/cni-a6946c33-18d9-07e6-cf2b-64c04fc209ed" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.164 [INFO][3840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" iface="eth0" netns="/var/run/netns/cni-a6946c33-18d9-07e6-cf2b-64c04fc209ed" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.164 [INFO][3840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" iface="eth0" netns="/var/run/netns/cni-a6946c33-18d9-07e6-cf2b-64c04fc209ed" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.164 [INFO][3840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.164 [INFO][3840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.187 [INFO][3847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.187 [INFO][3847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.187 [INFO][3847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.192 [WARNING][3847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.192 [INFO][3847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.195 [INFO][3847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:02.202192 containerd[1505]: 2026-04-21 10:13:02.199 [INFO][3840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:02.204350 containerd[1505]: time="2026-04-21T10:13:02.204245726Z" level=info msg="TearDown network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" successfully" Apr 21 10:13:02.204350 containerd[1505]: time="2026-04-21T10:13:02.204274676Z" level=info msg="StopPodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" returns successfully" Apr 21 10:13:02.205914 systemd[1]: run-netns-cni\x2da6946c33\x2d18d9\x2d07e6\x2dcf2b\x2d64c04fc209ed.mount: Deactivated successfully. 
Apr 21 10:13:02.346952 kubelet[2567]: I0421 10:13:02.346305 2567 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-whisker-ca-bundle\") pod \"022153ec-c601-49db-a266-f980fee052c1\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " Apr 21 10:13:02.346952 kubelet[2567]: I0421 10:13:02.346351 2567 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-nginx-config\") pod \"022153ec-c601-49db-a266-f980fee052c1\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " Apr 21 10:13:02.346952 kubelet[2567]: I0421 10:13:02.346384 2567 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k28t\" (UniqueName: \"kubernetes.io/projected/022153ec-c601-49db-a266-f980fee052c1-kube-api-access-7k28t\") pod \"022153ec-c601-49db-a266-f980fee052c1\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " Apr 21 10:13:02.346952 kubelet[2567]: I0421 10:13:02.346407 2567 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/022153ec-c601-49db-a266-f980fee052c1-whisker-backend-key-pair\") pod \"022153ec-c601-49db-a266-f980fee052c1\" (UID: \"022153ec-c601-49db-a266-f980fee052c1\") " Apr 21 10:13:02.348263 kubelet[2567]: I0421 10:13:02.348196 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "022153ec-c601-49db-a266-f980fee052c1" (UID: "022153ec-c601-49db-a266-f980fee052c1"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:13:02.349345 kubelet[2567]: I0421 10:13:02.349208 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "022153ec-c601-49db-a266-f980fee052c1" (UID: "022153ec-c601-49db-a266-f980fee052c1"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:13:02.353341 kubelet[2567]: I0421 10:13:02.353287 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022153ec-c601-49db-a266-f980fee052c1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "022153ec-c601-49db-a266-f980fee052c1" (UID: "022153ec-c601-49db-a266-f980fee052c1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:13:02.353590 systemd[1]: var-lib-kubelet-pods-022153ec\x2dc601\x2d49db\x2da266\x2df980fee052c1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:13:02.356043 kubelet[2567]: I0421 10:13:02.355961 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022153ec-c601-49db-a266-f980fee052c1-kube-api-access-7k28t" (OuterVolumeSpecName: "kube-api-access-7k28t") pod "022153ec-c601-49db-a266-f980fee052c1" (UID: "022153ec-c601-49db-a266-f980fee052c1"). InnerVolumeSpecName "kube-api-access-7k28t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:13:02.360339 systemd[1]: var-lib-kubelet-pods-022153ec\x2dc601\x2d49db\x2da266\x2df980fee052c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7k28t.mount: Deactivated successfully. 
Apr 21 10:13:02.446799 kubelet[2567]: I0421 10:13:02.446730 2567 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-nginx-config\") on node \"ci-4081-3-7-5-d97ac59edd\" DevicePath \"\"" Apr 21 10:13:02.446987 kubelet[2567]: I0421 10:13:02.446926 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7k28t\" (UniqueName: \"kubernetes.io/projected/022153ec-c601-49db-a266-f980fee052c1-kube-api-access-7k28t\") on node \"ci-4081-3-7-5-d97ac59edd\" DevicePath \"\"" Apr 21 10:13:02.446987 kubelet[2567]: I0421 10:13:02.446947 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/022153ec-c601-49db-a266-f980fee052c1-whisker-backend-key-pair\") on node \"ci-4081-3-7-5-d97ac59edd\" DevicePath \"\"" Apr 21 10:13:02.446987 kubelet[2567]: I0421 10:13:02.446962 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/022153ec-c601-49db-a266-f980fee052c1-whisker-ca-bundle\") on node \"ci-4081-3-7-5-d97ac59edd\" DevicePath \"\"" Apr 21 10:13:02.897204 systemd[1]: Removed slice kubepods-besteffort-pod022153ec_c601_49db_a266_f980fee052c1.slice - libcontainer container kubepods-besteffort-pod022153ec_c601_49db_a266_f980fee052c1.slice. Apr 21 10:13:03.092544 kubelet[2567]: I0421 10:13:03.092481 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:03.145687 systemd[1]: Created slice kubepods-besteffort-pod2ba0a20d_c575_4aa9_b561_4a6ffbc85a67.slice - libcontainer container kubepods-besteffort-pod2ba0a20d_c575_4aa9_b561_4a6ffbc85a67.slice. 
Apr 21 10:13:03.254203 kubelet[2567]: I0421 10:13:03.253861 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2ba0a20d-c575-4aa9-b561-4a6ffbc85a67-nginx-config\") pod \"whisker-57fbb9bf87-d8nfn\" (UID: \"2ba0a20d-c575-4aa9-b561-4a6ffbc85a67\") " pod="calico-system/whisker-57fbb9bf87-d8nfn" Apr 21 10:13:03.254203 kubelet[2567]: I0421 10:13:03.253926 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ba0a20d-c575-4aa9-b561-4a6ffbc85a67-whisker-backend-key-pair\") pod \"whisker-57fbb9bf87-d8nfn\" (UID: \"2ba0a20d-c575-4aa9-b561-4a6ffbc85a67\") " pod="calico-system/whisker-57fbb9bf87-d8nfn" Apr 21 10:13:03.254203 kubelet[2567]: I0421 10:13:03.253956 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7g9s\" (UniqueName: \"kubernetes.io/projected/2ba0a20d-c575-4aa9-b561-4a6ffbc85a67-kube-api-access-f7g9s\") pod \"whisker-57fbb9bf87-d8nfn\" (UID: \"2ba0a20d-c575-4aa9-b561-4a6ffbc85a67\") " pod="calico-system/whisker-57fbb9bf87-d8nfn" Apr 21 10:13:03.254203 kubelet[2567]: I0421 10:13:03.253983 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ba0a20d-c575-4aa9-b561-4a6ffbc85a67-whisker-ca-bundle\") pod \"whisker-57fbb9bf87-d8nfn\" (UID: \"2ba0a20d-c575-4aa9-b561-4a6ffbc85a67\") " pod="calico-system/whisker-57fbb9bf87-d8nfn" Apr 21 10:13:03.451740 containerd[1505]: time="2026-04-21T10:13:03.451683842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fbb9bf87-d8nfn,Uid:2ba0a20d-c575-4aa9-b561-4a6ffbc85a67,Namespace:calico-system,Attempt:0,}" Apr 21 10:13:03.569402 systemd-networkd[1416]: calia5547689dc6: Link UP Apr 21 10:13:03.570647 systemd-networkd[1416]: 
calia5547689dc6: Gained carrier Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.484 [ERROR][3950] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.496 [INFO][3950] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0 whisker-57fbb9bf87- calico-system 2ba0a20d-c575-4aa9-b561-4a6ffbc85a67 914 0 2026-04-21 10:13:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57fbb9bf87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd whisker-57fbb9bf87-d8nfn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5547689dc6 [] [] }} ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.496 [INFO][3950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.522 [INFO][3963] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" HandleID="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 
10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.528 [INFO][3963] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" HandleID="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"whisker-57fbb9bf87-d8nfn", "timestamp":"2026-04-21 10:13:03.52292622 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f8f20)} Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.528 [INFO][3963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.528 [INFO][3963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.528 [INFO][3963] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.530 [INFO][3963] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.535 [INFO][3963] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.539 [INFO][3963] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.542 [INFO][3963] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.544 [INFO][3963] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.544 [INFO][3963] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.546 [INFO][3963] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.551 [INFO][3963] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.555 [INFO][3963] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.193/26] block=192.168.29.192/26 handle="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.555 [INFO][3963] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.193/26] handle="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.555 [INFO][3963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:03.595596 containerd[1505]: 2026-04-21 10:13:03.555 [INFO][3963] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.193/26] IPv6=[] ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" HandleID="k8s-pod-network.e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 10:13:03.596113 containerd[1505]: 2026-04-21 10:13:03.559 [INFO][3950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0", GenerateName:"whisker-57fbb9bf87-", Namespace:"calico-system", SelfLink:"", UID:"2ba0a20d-c575-4aa9-b561-4a6ffbc85a67", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fbb9bf87", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"whisker-57fbb9bf87-d8nfn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5547689dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:03.596113 containerd[1505]: 2026-04-21 10:13:03.559 [INFO][3950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.193/32] ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 10:13:03.596113 containerd[1505]: 2026-04-21 10:13:03.559 [INFO][3950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5547689dc6 ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 10:13:03.596113 containerd[1505]: 2026-04-21 10:13:03.572 [INFO][3950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 10:13:03.596113 containerd[1505]: 2026-04-21 10:13:03.572 [INFO][3950] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0", GenerateName:"whisker-57fbb9bf87-", Namespace:"calico-system", SelfLink:"", UID:"2ba0a20d-c575-4aa9-b561-4a6ffbc85a67", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fbb9bf87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e", Pod:"whisker-57fbb9bf87-d8nfn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5547689dc6", MAC:"f2:89:a0:c4:00:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:03.596113 containerd[1505]: 2026-04-21 10:13:03.591 [INFO][3950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e" Namespace="calico-system" Pod="whisker-57fbb9bf87-d8nfn" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--57fbb9bf87--d8nfn-eth0" Apr 21 10:13:03.634638 containerd[1505]: time="2026-04-21T10:13:03.633046177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:03.634638 containerd[1505]: time="2026-04-21T10:13:03.633109277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:03.634638 containerd[1505]: time="2026-04-21T10:13:03.633125087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:03.634638 containerd[1505]: time="2026-04-21T10:13:03.633227088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:03.683954 systemd[1]: Started cri-containerd-e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e.scope - libcontainer container e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e. 
Apr 21 10:13:03.720965 containerd[1505]: time="2026-04-21T10:13:03.720869988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fbb9bf87-d8nfn,Uid:2ba0a20d-c575-4aa9-b561-4a6ffbc85a67,Namespace:calico-system,Attempt:0,} returns sandbox id \"e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e\"" Apr 21 10:13:03.724099 containerd[1505]: time="2026-04-21T10:13:03.723699849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:13:04.887203 kubelet[2567]: I0421 10:13:04.887136 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022153ec-c601-49db-a266-f980fee052c1" path="/var/lib/kubelet/pods/022153ec-c601-49db-a266-f980fee052c1/volumes" Apr 21 10:13:04.980722 kubelet[2567]: I0421 10:13:04.980273 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:05.466728 containerd[1505]: time="2026-04-21T10:13:05.466663885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:05.467779 containerd[1505]: time="2026-04-21T10:13:05.467613231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:13:05.468789 containerd[1505]: time="2026-04-21T10:13:05.468620937Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:05.470670 containerd[1505]: time="2026-04-21T10:13:05.470625580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:05.471749 containerd[1505]: time="2026-04-21T10:13:05.471341326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.747611007s" Apr 21 10:13:05.471749 containerd[1505]: time="2026-04-21T10:13:05.471365406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:13:05.474519 containerd[1505]: time="2026-04-21T10:13:05.474501566Z" level=info msg="CreateContainer within sandbox \"e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:13:05.486973 systemd-networkd[1416]: calia5547689dc6: Gained IPv6LL Apr 21 10:13:05.497407 containerd[1505]: time="2026-04-21T10:13:05.497351386Z" level=info msg="CreateContainer within sandbox \"e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5940529b6d10288784b89f73d9776e9626ca6fe3de4caeb24deaa7c562972219\"" Apr 21 10:13:05.498084 containerd[1505]: time="2026-04-21T10:13:05.498052441Z" level=info msg="StartContainer for \"5940529b6d10288784b89f73d9776e9626ca6fe3de4caeb24deaa7c562972219\"" Apr 21 10:13:05.523911 systemd[1]: Started cri-containerd-5940529b6d10288784b89f73d9776e9626ca6fe3de4caeb24deaa7c562972219.scope - libcontainer container 5940529b6d10288784b89f73d9776e9626ca6fe3de4caeb24deaa7c562972219. 
Apr 21 10:13:05.555228 containerd[1505]: time="2026-04-21T10:13:05.555200845Z" level=info msg="StartContainer for \"5940529b6d10288784b89f73d9776e9626ca6fe3de4caeb24deaa7c562972219\" returns successfully" Apr 21 10:13:05.557523 containerd[1505]: time="2026-04-21T10:13:05.557501240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:13:06.010862 kernel: calico-node[4117]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:13:06.434770 systemd-networkd[1416]: vxlan.calico: Link UP Apr 21 10:13:06.434777 systemd-networkd[1416]: vxlan.calico: Gained carrier Apr 21 10:13:06.889189 kubelet[2567]: I0421 10:13:06.889013 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:06.917925 systemd[1]: run-containerd-runc-k8s.io-e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6-runc.p5G9FL.mount: Deactivated successfully. Apr 21 10:13:07.401150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167961333.mount: Deactivated successfully. 
Apr 21 10:13:07.418919 containerd[1505]: time="2026-04-21T10:13:07.418874244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:07.420148 containerd[1505]: time="2026-04-21T10:13:07.420115211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:13:07.421119 containerd[1505]: time="2026-04-21T10:13:07.421077077Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:07.423121 containerd[1505]: time="2026-04-21T10:13:07.423091038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:07.423566 containerd[1505]: time="2026-04-21T10:13:07.423541672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.866015272s" Apr 21 10:13:07.423602 containerd[1505]: time="2026-04-21T10:13:07.423568442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:13:07.427445 containerd[1505]: time="2026-04-21T10:13:07.427420393Z" level=info msg="CreateContainer within sandbox \"e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:13:07.446873 
containerd[1505]: time="2026-04-21T10:13:07.446720726Z" level=info msg="CreateContainer within sandbox \"e910de82a2eccda4f99d796b5dfb995237e7f4763290455bda458f85305b8a7e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fc5ce1c6cf6a0e15a456197012dc5777c14f48d98d4fe461a9bf3e7c88a7cc1d\"" Apr 21 10:13:07.447567 containerd[1505]: time="2026-04-21T10:13:07.447512171Z" level=info msg="StartContainer for \"fc5ce1c6cf6a0e15a456197012dc5777c14f48d98d4fe461a9bf3e7c88a7cc1d\"" Apr 21 10:13:07.473984 systemd[1]: Started cri-containerd-fc5ce1c6cf6a0e15a456197012dc5777c14f48d98d4fe461a9bf3e7c88a7cc1d.scope - libcontainer container fc5ce1c6cf6a0e15a456197012dc5777c14f48d98d4fe461a9bf3e7c88a7cc1d. Apr 21 10:13:07.514862 containerd[1505]: time="2026-04-21T10:13:07.514752422Z" level=info msg="StartContainer for \"fc5ce1c6cf6a0e15a456197012dc5777c14f48d98d4fe461a9bf3e7c88a7cc1d\" returns successfully" Apr 21 10:13:08.304604 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL Apr 21 10:13:11.885620 containerd[1505]: time="2026-04-21T10:13:11.885060876Z" level=info msg="StopPodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\"" Apr 21 10:13:11.934789 kubelet[2567]: I0421 10:13:11.934395 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57fbb9bf87-d8nfn" podStartSLOduration=5.233470132 podStartE2EDuration="8.934380302s" podCreationTimestamp="2026-04-21 10:13:03 +0000 UTC" firstStartedPulling="2026-04-21 10:13:03.723168494 +0000 UTC m=+34.927859733" lastFinishedPulling="2026-04-21 10:13:07.424078664 +0000 UTC m=+38.628769903" observedRunningTime="2026-04-21 10:13:08.1266964 +0000 UTC m=+39.331387649" watchObservedRunningTime="2026-04-21 10:13:11.934380302 +0000 UTC m=+43.139071551" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.932 [INFO][4369] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" 
Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.932 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" iface="eth0" netns="/var/run/netns/cni-462fcb76-d052-f3ba-2fbc-347811dc9718" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.933 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" iface="eth0" netns="/var/run/netns/cni-462fcb76-d052-f3ba-2fbc-347811dc9718" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.934 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" iface="eth0" netns="/var/run/netns/cni-462fcb76-d052-f3ba-2fbc-347811dc9718" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.935 [INFO][4369] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.935 [INFO][4369] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.951 [INFO][4376] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.951 [INFO][4376] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.951 [INFO][4376] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.955 [WARNING][4376] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.956 [INFO][4376] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.957 [INFO][4376] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:11.961400 containerd[1505]: 2026-04-21 10:13:11.959 [INFO][4369] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:11.964029 containerd[1505]: time="2026-04-21T10:13:11.963889069Z" level=info msg="TearDown network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" successfully" Apr 21 10:13:11.964029 containerd[1505]: time="2026-04-21T10:13:11.963927499Z" level=info msg="StopPodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" returns successfully" Apr 21 10:13:11.964027 systemd[1]: run-netns-cni\x2d462fcb76\x2dd052\x2df3ba\x2d2fbc\x2d347811dc9718.mount: Deactivated successfully. 
Apr 21 10:13:11.966987 containerd[1505]: time="2026-04-21T10:13:11.966801942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-xtfdc,Uid:421160f3-b971-4667-88a9-535bf4abfe5b,Namespace:calico-system,Attempt:1,}" Apr 21 10:13:12.060852 systemd-networkd[1416]: calicfd38b856da: Link UP Apr 21 10:13:12.062678 systemd-networkd[1416]: calicfd38b856da: Gained carrier Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.005 [INFO][4382] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0 calico-apiserver-7f7c8cd7bf- calico-system 421160f3-b971-4667-88a9-535bf4abfe5b 959 0 2026-04-21 10:12:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7c8cd7bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd calico-apiserver-7f7c8cd7bf-xtfdc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calicfd38b856da [] [] }} ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.005 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.026 [INFO][4394] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" HandleID="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.031 [INFO][4394] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" HandleID="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"calico-apiserver-7f7c8cd7bf-xtfdc", "timestamp":"2026-04-21 10:13:12.026174348 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00032b4a0)} Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.032 [INFO][4394] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.032 [INFO][4394] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.032 [INFO][4394] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.034 [INFO][4394] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.038 [INFO][4394] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.041 [INFO][4394] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.043 [INFO][4394] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.045 [INFO][4394] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.045 [INFO][4394] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.046 [INFO][4394] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0 Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.049 [INFO][4394] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.055 [INFO][4394] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.194/26] block=192.168.29.192/26 handle="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.055 [INFO][4394] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.194/26] handle="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.055 [INFO][4394] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:12.078863 containerd[1505]: 2026-04-21 10:13:12.055 [INFO][4394] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.194/26] IPv6=[] ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" HandleID="k8s-pod-network.6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.079286 containerd[1505]: 2026-04-21 10:13:12.057 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"421160f3-b971-4667-88a9-535bf4abfe5b", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"calico-apiserver-7f7c8cd7bf-xtfdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicfd38b856da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:12.079286 containerd[1505]: 2026-04-21 10:13:12.057 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.194/32] ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.079286 containerd[1505]: 2026-04-21 10:13:12.057 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfd38b856da ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.079286 containerd[1505]: 2026-04-21 10:13:12.062 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" 
Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.079286 containerd[1505]: 2026-04-21 10:13:12.063 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"421160f3-b971-4667-88a9-535bf4abfe5b", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0", Pod:"calico-apiserver-7f7c8cd7bf-xtfdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicfd38b856da", 
MAC:"56:4e:af:26:48:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:12.079286 containerd[1505]: 2026-04-21 10:13:12.074 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-xtfdc" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:12.107312 containerd[1505]: time="2026-04-21T10:13:12.107002417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:12.107312 containerd[1505]: time="2026-04-21T10:13:12.107054097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:12.107312 containerd[1505]: time="2026-04-21T10:13:12.107064747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:12.107472 containerd[1505]: time="2026-04-21T10:13:12.107246498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:12.130069 systemd[1]: run-containerd-runc-k8s.io-6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0-runc.JjzFWs.mount: Deactivated successfully. Apr 21 10:13:12.138946 systemd[1]: Started cri-containerd-6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0.scope - libcontainer container 6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0. 
Apr 21 10:13:12.174586 containerd[1505]: time="2026-04-21T10:13:12.174531699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-xtfdc,Uid:421160f3-b971-4667-88a9-535bf4abfe5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0\"" Apr 21 10:13:12.177290 containerd[1505]: time="2026-04-21T10:13:12.177197151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:13:13.424120 systemd-networkd[1416]: calicfd38b856da: Gained IPv6LL Apr 21 10:13:13.887321 containerd[1505]: time="2026-04-21T10:13:13.886504665Z" level=info msg="StopPodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\"" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.950 [INFO][4477] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.953 [INFO][4477] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" iface="eth0" netns="/var/run/netns/cni-631be814-9242-1032-978a-e47f325eadc5" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.955 [INFO][4477] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" iface="eth0" netns="/var/run/netns/cni-631be814-9242-1032-978a-e47f325eadc5" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.955 [INFO][4477] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" iface="eth0" netns="/var/run/netns/cni-631be814-9242-1032-978a-e47f325eadc5" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.956 [INFO][4477] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.956 [INFO][4477] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.979 [INFO][4488] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.980 [INFO][4488] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.980 [INFO][4488] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.985 [WARNING][4488] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.985 [INFO][4488] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.987 [INFO][4488] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:13.992507 containerd[1505]: 2026-04-21 10:13:13.989 [INFO][4477] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:13.993140 containerd[1505]: time="2026-04-21T10:13:13.992880930Z" level=info msg="TearDown network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" successfully" Apr 21 10:13:13.993140 containerd[1505]: time="2026-04-21T10:13:13.992906560Z" level=info msg="StopPodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" returns successfully" Apr 21 10:13:13.997570 systemd[1]: run-netns-cni\x2d631be814\x2d9242\x2d1032\x2d978a\x2de47f325eadc5.mount: Deactivated successfully. 
Apr 21 10:13:13.999161 containerd[1505]: time="2026-04-21T10:13:13.999130595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8vcnx,Uid:33a681f4-231e-46d8-acc0-27e3c3cf72d1,Namespace:calico-system,Attempt:1,}" Apr 21 10:13:14.117909 systemd-networkd[1416]: cali7409c8746b7: Link UP Apr 21 10:13:14.118106 systemd-networkd[1416]: cali7409c8746b7: Gained carrier Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.047 [INFO][4497] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0 goldmane-cccfbd5cf- calico-system 33a681f4-231e-46d8-acc0-27e3c3cf72d1 971 0 2026-04-21 10:12:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd goldmane-cccfbd5cf-8vcnx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7409c8746b7 [] [] }} ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.047 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.075 [INFO][4509] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" 
HandleID="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.081 [INFO][4509] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" HandleID="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdc80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"goldmane-cccfbd5cf-8vcnx", "timestamp":"2026-04-21 10:13:14.075052408 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036edc0)} Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.082 [INFO][4509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.082 [INFO][4509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.082 [INFO][4509] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.084 [INFO][4509] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.088 [INFO][4509] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.092 [INFO][4509] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.094 [INFO][4509] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.096 [INFO][4509] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.096 [INFO][4509] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.098 [INFO][4509] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4 Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.102 [INFO][4509] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.109 [INFO][4509] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.195/26] block=192.168.29.192/26 handle="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.109 [INFO][4509] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.195/26] handle="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.109 [INFO][4509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:14.139930 containerd[1505]: 2026-04-21 10:13:14.109 [INFO][4509] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.195/26] IPv6=[] ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" HandleID="k8s-pod-network.a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.140420 containerd[1505]: 2026-04-21 10:13:14.112 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"33a681f4-231e-46d8-acc0-27e3c3cf72d1", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"goldmane-cccfbd5cf-8vcnx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7409c8746b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:14.140420 containerd[1505]: 2026-04-21 10:13:14.112 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.195/32] ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.140420 containerd[1505]: 2026-04-21 10:13:14.112 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7409c8746b7 ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.140420 containerd[1505]: 2026-04-21 10:13:14.117 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.140420 containerd[1505]: 2026-04-21 10:13:14.118 [INFO][4497] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"33a681f4-231e-46d8-acc0-27e3c3cf72d1", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4", Pod:"goldmane-cccfbd5cf-8vcnx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7409c8746b7", MAC:"7e:d0:fe:d9:ce:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:14.140420 containerd[1505]: 2026-04-21 10:13:14.133 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-8vcnx" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:14.163589 containerd[1505]: time="2026-04-21T10:13:14.163085758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:14.163589 containerd[1505]: time="2026-04-21T10:13:14.163180958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:14.163589 containerd[1505]: time="2026-04-21T10:13:14.163193368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:14.163589 containerd[1505]: time="2026-04-21T10:13:14.163412219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:14.196984 systemd[1]: Started cri-containerd-a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4.scope - libcontainer container a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4. Apr 21 10:13:14.201560 systemd[1]: run-containerd-runc-k8s.io-a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4-runc.e3H00l.mount: Deactivated successfully. 
Apr 21 10:13:14.276040 containerd[1505]: time="2026-04-21T10:13:14.275979013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-8vcnx,Uid:33a681f4-231e-46d8-acc0-27e3c3cf72d1,Namespace:calico-system,Attempt:1,} returns sandbox id \"a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4\"" Apr 21 10:13:14.677723 containerd[1505]: time="2026-04-21T10:13:14.677666572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:14.678634 containerd[1505]: time="2026-04-21T10:13:14.678532555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:13:14.679711 containerd[1505]: time="2026-04-21T10:13:14.679490579Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:14.681655 containerd[1505]: time="2026-04-21T10:13:14.681557146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:14.682370 containerd[1505]: time="2026-04-21T10:13:14.682040729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.504820668s" Apr 21 10:13:14.682370 containerd[1505]: time="2026-04-21T10:13:14.682074099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 
21 10:13:14.683764 containerd[1505]: time="2026-04-21T10:13:14.683745065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:13:14.687075 containerd[1505]: time="2026-04-21T10:13:14.687046898Z" level=info msg="CreateContainer within sandbox \"6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:13:14.697376 containerd[1505]: time="2026-04-21T10:13:14.697338208Z" level=info msg="CreateContainer within sandbox \"6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af6421ca80f54bfdccad207679c7bfdad0adc6ce65de38659649bf8fe61824f3\"" Apr 21 10:13:14.698683 containerd[1505]: time="2026-04-21T10:13:14.697925280Z" level=info msg="StartContainer for \"af6421ca80f54bfdccad207679c7bfdad0adc6ce65de38659649bf8fe61824f3\"" Apr 21 10:13:14.734985 systemd[1]: Started cri-containerd-af6421ca80f54bfdccad207679c7bfdad0adc6ce65de38659649bf8fe61824f3.scope - libcontainer container af6421ca80f54bfdccad207679c7bfdad0adc6ce65de38659649bf8fe61824f3. 
Apr 21 10:13:14.781355 containerd[1505]: time="2026-04-21T10:13:14.780690759Z" level=info msg="StartContainer for \"af6421ca80f54bfdccad207679c7bfdad0adc6ce65de38659649bf8fe61824f3\" returns successfully" Apr 21 10:13:14.888048 containerd[1505]: time="2026-04-21T10:13:14.887466530Z" level=info msg="StopPodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\"" Apr 21 10:13:14.890324 containerd[1505]: time="2026-04-21T10:13:14.889972840Z" level=info msg="StopPodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\"" Apr 21 10:13:14.891302 containerd[1505]: time="2026-04-21T10:13:14.891094795Z" level=info msg="StopPodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\"" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.015 [INFO][4646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" iface="eth0" netns="/var/run/netns/cni-29292515-9e3f-f301-a717-f43025f37443" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" iface="eth0" netns="/var/run/netns/cni-29292515-9e3f-f301-a717-f43025f37443" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" iface="eth0" netns="/var/run/netns/cni-29292515-9e3f-f301-a717-f43025f37443" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.061 [INFO][4672] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.063 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.063 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.073 [WARNING][4672] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.073 [INFO][4672] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.074 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:15.085030 containerd[1505]: 2026-04-21 10:13:15.079 [INFO][4646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:15.094892 containerd[1505]: time="2026-04-21T10:13:15.085124155Z" level=info msg="TearDown network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" successfully" Apr 21 10:13:15.094892 containerd[1505]: time="2026-04-21T10:13:15.085153535Z" level=info msg="StopPodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" returns successfully" Apr 21 10:13:15.094135 systemd[1]: run-netns-cni\x2d29292515\x2d9e3f\x2df301\x2da717\x2df43025f37443.mount: Deactivated successfully. 
Apr 21 10:13:15.098549 containerd[1505]: time="2026-04-21T10:13:15.097654881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-5zwrh,Uid:563db82c-210b-4750-a813-d95c4fea43ab,Namespace:calico-system,Attempt:1,}" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.009 [INFO][4656] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.009 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" iface="eth0" netns="/var/run/netns/cni-584fa1ef-f5f6-6ae5-9bb5-7988d310f8f4" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.010 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" iface="eth0" netns="/var/run/netns/cni-584fa1ef-f5f6-6ae5-9bb5-7988d310f8f4" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.010 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" iface="eth0" netns="/var/run/netns/cni-584fa1ef-f5f6-6ae5-9bb5-7988d310f8f4" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.010 [INFO][4656] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.010 [INFO][4656] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.071 [INFO][4669] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.072 [INFO][4669] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.074 [INFO][4669] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.090 [WARNING][4669] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.090 [INFO][4669] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.091 [INFO][4669] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:15.105320 containerd[1505]: 2026-04-21 10:13:15.098 [INFO][4656] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:15.107202 containerd[1505]: time="2026-04-21T10:13:15.107164655Z" level=info msg="TearDown network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" successfully" Apr 21 10:13:15.107310 containerd[1505]: time="2026-04-21T10:13:15.107296176Z" level=info msg="StopPodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" returns successfully" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.015 [INFO][4652] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.015 [INFO][4652] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" iface="eth0" netns="/var/run/netns/cni-3ddddfcb-b065-a2ea-af10-e68c04aa14a7" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.016 [INFO][4652] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" iface="eth0" netns="/var/run/netns/cni-3ddddfcb-b065-a2ea-af10-e68c04aa14a7" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4652] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" iface="eth0" netns="/var/run/netns/cni-3ddddfcb-b065-a2ea-af10-e68c04aa14a7" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4652] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.017 [INFO][4652] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.078 [INFO][4673] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.080 [INFO][4673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.092 [INFO][4673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.100 [WARNING][4673] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.101 [INFO][4673] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.102 [INFO][4673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:15.109487 containerd[1505]: 2026-04-21 10:13:15.106 [INFO][4652] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:15.110150 containerd[1505]: time="2026-04-21T10:13:15.109612524Z" level=info msg="TearDown network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" successfully" Apr 21 10:13:15.110150 containerd[1505]: time="2026-04-21T10:13:15.109625504Z" level=info msg="StopPodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" returns successfully" Apr 21 10:13:15.113214 containerd[1505]: time="2026-04-21T10:13:15.113184187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df6fcc68c-6prn8,Uid:cf17b894-678a-46fa-83ff-56280e6c52d6,Namespace:calico-system,Attempt:1,}" Apr 21 10:13:15.113954 systemd[1]: run-netns-cni\x2d584fa1ef\x2df5f6\x2d6ae5\x2d9bb5\x2d7988d310f8f4.mount: Deactivated successfully. 
Apr 21 10:13:15.121650 systemd[1]: run-netns-cni\x2d3ddddfcb\x2db065\x2da2ea\x2daf10\x2de68c04aa14a7.mount: Deactivated successfully. Apr 21 10:13:15.124364 containerd[1505]: time="2026-04-21T10:13:15.122178700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n2ccb,Uid:df642170-a65e-4ca4-b9a5-68415ff77a88,Namespace:kube-system,Attempt:1,}" Apr 21 10:13:15.159726 kubelet[2567]: I0421 10:13:15.159671 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f7c8cd7bf-xtfdc" podStartSLOduration=26.653452074 podStartE2EDuration="29.159658437s" podCreationTimestamp="2026-04-21 10:12:46 +0000 UTC" firstStartedPulling="2026-04-21 10:13:12.17700475 +0000 UTC m=+43.381695989" lastFinishedPulling="2026-04-21 10:13:14.683211113 +0000 UTC m=+45.887902352" observedRunningTime="2026-04-21 10:13:15.156317664 +0000 UTC m=+46.361008903" watchObservedRunningTime="2026-04-21 10:13:15.159658437 +0000 UTC m=+46.364349676" Apr 21 10:13:15.317499 systemd-networkd[1416]: cali66358944cd7: Link UP Apr 21 10:13:15.318761 systemd-networkd[1416]: cali66358944cd7: Gained carrier Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.201 [INFO][4707] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0 coredns-66bc5c9577- kube-system df642170-a65e-4ca4-b9a5-68415ff77a88 985 0 2026-04-21 10:12:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd coredns-66bc5c9577-n2ccb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66358944cd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.202 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.257 [INFO][4728] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" HandleID="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.268 [INFO][4728] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" HandleID="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"coredns-66bc5c9577-n2ccb", "timestamp":"2026-04-21 10:13:15.257295791 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.268 [INFO][4728] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.268 [INFO][4728] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.268 [INFO][4728] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.271 [INFO][4728] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.277 [INFO][4728] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.282 [INFO][4728] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.284 [INFO][4728] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.287 [INFO][4728] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.287 [INFO][4728] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.288 [INFO][4728] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576 Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.304 [INFO][4728] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 
handle="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.309 [INFO][4728] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.196/26] block=192.168.29.192/26 handle="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.309 [INFO][4728] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.196/26] handle="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.309 [INFO][4728] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:15.329657 containerd[1505]: 2026-04-21 10:13:15.309 [INFO][4728] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.196/26] IPv6=[] ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" HandleID="k8s-pod-network.396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.330261 containerd[1505]: 2026-04-21 10:13:15.314 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df642170-a65e-4ca4-b9a5-68415ff77a88", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"coredns-66bc5c9577-n2ccb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66358944cd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:15.330261 containerd[1505]: 2026-04-21 10:13:15.314 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.196/32] ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" 
WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.330261 containerd[1505]: 2026-04-21 10:13:15.314 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66358944cd7 ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.330261 containerd[1505]: 2026-04-21 10:13:15.315 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.330395 containerd[1505]: 2026-04-21 10:13:15.316 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df642170-a65e-4ca4-b9a5-68415ff77a88", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576", Pod:"coredns-66bc5c9577-n2ccb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66358944cd7", MAC:"8a:8e:f8:6a:15:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:15.330395 containerd[1505]: 2026-04-21 10:13:15.325 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576" Namespace="kube-system" Pod="coredns-66bc5c9577-n2ccb" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:15.368865 containerd[1505]: time="2026-04-21T10:13:15.368147665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:15.368865 containerd[1505]: time="2026-04-21T10:13:15.368188115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:15.368865 containerd[1505]: time="2026-04-21T10:13:15.368196325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:15.368865 containerd[1505]: time="2026-04-21T10:13:15.368257005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:15.402934 systemd[1]: Started cri-containerd-396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576.scope - libcontainer container 396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576. Apr 21 10:13:15.435033 systemd-networkd[1416]: cali54a48d2140c: Link UP Apr 21 10:13:15.437967 systemd-networkd[1416]: cali54a48d2140c: Gained carrier Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.186 [INFO][4689] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0 calico-apiserver-7f7c8cd7bf- calico-system 563db82c-210b-4750-a813-d95c4fea43ab 986 0 2026-04-21 10:12:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7c8cd7bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd calico-apiserver-7f7c8cd7bf-5zwrh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali54a48d2140c [] [] }} ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" 
Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.187 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.257 [INFO][4723] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" HandleID="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.268 [INFO][4723] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" HandleID="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"calico-apiserver-7f7c8cd7bf-5zwrh", "timestamp":"2026-04-21 10:13:15.257596172 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e8580)} Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.268 [INFO][4723] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.309 [INFO][4723] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.310 [INFO][4723] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.371 [INFO][4723] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.380 [INFO][4723] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.385 [INFO][4723] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.389 [INFO][4723] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.392 [INFO][4723] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.392 [INFO][4723] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.399 [INFO][4723] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.405 [INFO][4723] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" 
host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.415 [INFO][4723] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.197/26] block=192.168.29.192/26 handle="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.416 [INFO][4723] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.197/26] handle="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.417 [INFO][4723] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:15.461503 containerd[1505]: 2026-04-21 10:13:15.417 [INFO][4723] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.197/26] IPv6=[] ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" HandleID="k8s-pod-network.bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.461955 containerd[1505]: 2026-04-21 10:13:15.421 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"563db82c-210b-4750-a813-d95c4fea43ab", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"calico-apiserver-7f7c8cd7bf-5zwrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali54a48d2140c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:15.461955 containerd[1505]: 2026-04-21 10:13:15.421 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.197/32] ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.461955 containerd[1505]: 2026-04-21 10:13:15.421 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54a48d2140c ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.461955 containerd[1505]: 2026-04-21 10:13:15.442 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.461955 containerd[1505]: 2026-04-21 10:13:15.444 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"563db82c-210b-4750-a813-d95c4fea43ab", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d", Pod:"calico-apiserver-7f7c8cd7bf-5zwrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali54a48d2140c", MAC:"76:6a:6d:17:b9:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:15.461955 containerd[1505]: 2026-04-21 10:13:15.457 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d" Namespace="calico-system" Pod="calico-apiserver-7f7c8cd7bf-5zwrh" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:15.465859 containerd[1505]: time="2026-04-21T10:13:15.465794991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n2ccb,Uid:df642170-a65e-4ca4-b9a5-68415ff77a88,Namespace:kube-system,Attempt:1,} returns sandbox id \"396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576\"" Apr 21 10:13:15.479832 containerd[1505]: time="2026-04-21T10:13:15.479774772Z" level=info msg="CreateContainer within sandbox \"396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:13:15.505302 containerd[1505]: time="2026-04-21T10:13:15.504992933Z" level=info msg="CreateContainer within sandbox \"396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90ee0741962d50560668b3e5f6ecbbbb0ba9baa71ad41c4620e3a289d5cd755b\"" Apr 21 10:13:15.505759 containerd[1505]: time="2026-04-21T10:13:15.505662076Z" level=info msg="StartContainer for \"90ee0741962d50560668b3e5f6ecbbbb0ba9baa71ad41c4620e3a289d5cd755b\"" Apr 21 10:13:15.529332 containerd[1505]: time="2026-04-21T10:13:15.526528961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:15.529332 containerd[1505]: time="2026-04-21T10:13:15.526904184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:15.529332 containerd[1505]: time="2026-04-21T10:13:15.526913874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:15.529332 containerd[1505]: time="2026-04-21T10:13:15.527026204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:15.559945 systemd[1]: Started cri-containerd-bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d.scope - libcontainer container bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d. Apr 21 10:13:15.561037 systemd-networkd[1416]: calif567db58b11: Link UP Apr 21 10:13:15.564215 systemd-networkd[1416]: calif567db58b11: Gained carrier Apr 21 10:13:15.582929 systemd[1]: Started cri-containerd-90ee0741962d50560668b3e5f6ecbbbb0ba9baa71ad41c4620e3a289d5cd755b.scope - libcontainer container 90ee0741962d50560668b3e5f6ecbbbb0ba9baa71ad41c4620e3a289d5cd755b. 
Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.227 [INFO][4704] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0 calico-kube-controllers-df6fcc68c- calico-system cf17b894-678a-46fa-83ff-56280e6c52d6 984 0 2026-04-21 10:12:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:df6fcc68c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd calico-kube-controllers-df6fcc68c-6prn8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif567db58b11 [] [] }} ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.227 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.293 [INFO][4735] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" HandleID="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.303 [INFO][4735] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" HandleID="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d3700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"calico-kube-controllers-df6fcc68c-6prn8", "timestamp":"2026-04-21 10:13:15.293024342 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000122dc0)} Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.303 [INFO][4735] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.416 [INFO][4735] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.416 [INFO][4735] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.472 [INFO][4735] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.479 [INFO][4735] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.493 [INFO][4735] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.497 [INFO][4735] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.502 [INFO][4735] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.502 [INFO][4735] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.505 [INFO][4735] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220 Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.520 [INFO][4735] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.533 [INFO][4735] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.198/26] block=192.168.29.192/26 handle="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.534 [INFO][4735] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.198/26] handle="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.534 [INFO][4735] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:15.593559 containerd[1505]: 2026-04-21 10:13:15.534 [INFO][4735] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.198/26] IPv6=[] ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" HandleID="k8s-pod-network.1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.595321 containerd[1505]: 2026-04-21 10:13:15.547 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0", GenerateName:"calico-kube-controllers-df6fcc68c-", Namespace:"calico-system", SelfLink:"", UID:"cf17b894-678a-46fa-83ff-56280e6c52d6", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df6fcc68c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"calico-kube-controllers-df6fcc68c-6prn8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif567db58b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:15.595321 containerd[1505]: 2026-04-21 10:13:15.547 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.198/32] ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.595321 containerd[1505]: 2026-04-21 10:13:15.547 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif567db58b11 ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.595321 containerd[1505]: 2026-04-21 10:13:15.570 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.595321 containerd[1505]: 2026-04-21 10:13:15.577 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0", GenerateName:"calico-kube-controllers-df6fcc68c-", Namespace:"calico-system", SelfLink:"", UID:"cf17b894-678a-46fa-83ff-56280e6c52d6", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df6fcc68c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220", Pod:"calico-kube-controllers-df6fcc68c-6prn8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif567db58b11", MAC:"ea:c3:c7:28:dd:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:15.595321 containerd[1505]: 2026-04-21 10:13:15.588 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220" Namespace="calico-system" Pod="calico-kube-controllers-df6fcc68c-6prn8" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:15.625997 containerd[1505]: time="2026-04-21T10:13:15.624599989Z" level=info msg="StartContainer for \"90ee0741962d50560668b3e5f6ecbbbb0ba9baa71ad41c4620e3a289d5cd755b\" returns successfully" Apr 21 10:13:15.638346 containerd[1505]: time="2026-04-21T10:13:15.638231559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:15.638346 containerd[1505]: time="2026-04-21T10:13:15.638296899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:15.638346 containerd[1505]: time="2026-04-21T10:13:15.638305579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:15.638603 containerd[1505]: time="2026-04-21T10:13:15.638402739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:15.667943 systemd[1]: Started cri-containerd-1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220.scope - libcontainer container 1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220. 
Apr 21 10:13:15.677713 containerd[1505]: time="2026-04-21T10:13:15.677669092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7c8cd7bf-5zwrh,Uid:563db82c-210b-4750-a813-d95c4fea43ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d\"" Apr 21 10:13:15.685949 containerd[1505]: time="2026-04-21T10:13:15.685918152Z" level=info msg="CreateContainer within sandbox \"bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:13:15.705404 containerd[1505]: time="2026-04-21T10:13:15.705350523Z" level=info msg="CreateContainer within sandbox \"bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"483c006b6acf4cd43f3bc98e1c4899b04ecf37129ea868dc8ffc9c8da275c7a7\"" Apr 21 10:13:15.706286 containerd[1505]: time="2026-04-21T10:13:15.706258546Z" level=info msg="StartContainer for \"483c006b6acf4cd43f3bc98e1c4899b04ecf37129ea868dc8ffc9c8da275c7a7\"" Apr 21 10:13:15.764985 systemd[1]: Started cri-containerd-483c006b6acf4cd43f3bc98e1c4899b04ecf37129ea868dc8ffc9c8da275c7a7.scope - libcontainer container 483c006b6acf4cd43f3bc98e1c4899b04ecf37129ea868dc8ffc9c8da275c7a7. 
Apr 21 10:13:15.818284 containerd[1505]: time="2026-04-21T10:13:15.818249824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-df6fcc68c-6prn8,Uid:cf17b894-678a-46fa-83ff-56280e6c52d6,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220\"" Apr 21 10:13:15.884682 containerd[1505]: time="2026-04-21T10:13:15.884577986Z" level=info msg="StopPodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\"" Apr 21 10:13:15.888167 containerd[1505]: time="2026-04-21T10:13:15.887954818Z" level=info msg="StopPodSandbox for \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\"" Apr 21 10:13:15.888463 containerd[1505]: time="2026-04-21T10:13:15.888201439Z" level=info msg="StartContainer for \"483c006b6acf4cd43f3bc98e1c4899b04ecf37129ea868dc8ffc9c8da275c7a7\" returns successfully" Apr 21 10:13:15.919000 systemd-networkd[1416]: cali7409c8746b7: Gained IPv6LL Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:15.974 [INFO][5012] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:15.975 [INFO][5012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" iface="eth0" netns="/var/run/netns/cni-e5ab0d58-6553-14b2-7bdc-1ae49b45f404" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:15.975 [INFO][5012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" iface="eth0" netns="/var/run/netns/cni-e5ab0d58-6553-14b2-7bdc-1ae49b45f404" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:15.976 [INFO][5012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" iface="eth0" netns="/var/run/netns/cni-e5ab0d58-6553-14b2-7bdc-1ae49b45f404" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:15.976 [INFO][5012] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:15.976 [INFO][5012] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.028 [INFO][5033] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.028 [INFO][5033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.028 [INFO][5033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.040 [WARNING][5033] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.041 [INFO][5033] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.043 [INFO][5033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:16.050632 containerd[1505]: 2026-04-21 10:13:16.047 [INFO][5012] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:16.053071 containerd[1505]: time="2026-04-21T10:13:16.052955188Z" level=info msg="TearDown network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" successfully" Apr 21 10:13:16.053071 containerd[1505]: time="2026-04-21T10:13:16.052979838Z" level=info msg="StopPodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" returns successfully" Apr 21 10:13:16.055097 systemd[1]: run-netns-cni\x2de5ab0d58\x2d6553\x2d14b2\x2d7bdc\x2d1ae49b45f404.mount: Deactivated successfully. 
Apr 21 10:13:16.060017 containerd[1505]: time="2026-04-21T10:13:16.058740648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9tq,Uid:45c6765d-bd67-4624-ba22-eae28e77978f,Namespace:calico-system,Attempt:1,}" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:15.976 [INFO][5013] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:15.976 [INFO][5013] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" iface="eth0" netns="/var/run/netns/cni-50bbce9f-01d2-6427-d1c5-3d38b2b70bb2" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:15.976 [INFO][5013] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" iface="eth0" netns="/var/run/netns/cni-50bbce9f-01d2-6427-d1c5-3d38b2b70bb2" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:15.977 [INFO][5013] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" iface="eth0" netns="/var/run/netns/cni-50bbce9f-01d2-6427-d1c5-3d38b2b70bb2" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:15.977 [INFO][5013] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:15.977 [INFO][5013] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.032 [INFO][5035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.034 [INFO][5035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.043 [INFO][5035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.052 [WARNING][5035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.053 [INFO][5035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.056 [INFO][5035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:16.064640 containerd[1505]: 2026-04-21 10:13:16.060 [INFO][5013] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:16.065405 containerd[1505]: time="2026-04-21T10:13:16.065375221Z" level=info msg="TearDown network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" successfully" Apr 21 10:13:16.065759 containerd[1505]: time="2026-04-21T10:13:16.065743042Z" level=info msg="StopPodSandbox for \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" returns successfully" Apr 21 10:13:16.069808 systemd[1]: run-netns-cni\x2d50bbce9f\x2d01d2\x2d6427\x2dd1c5\x2d3d38b2b70bb2.mount: Deactivated successfully. 
Apr 21 10:13:16.070737 containerd[1505]: time="2026-04-21T10:13:16.070719399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qxs6,Uid:b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c,Namespace:kube-system,Attempt:1,}" Apr 21 10:13:16.150654 kubelet[2567]: I0421 10:13:16.150631 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:16.188700 kubelet[2567]: I0421 10:13:16.187361 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n2ccb" podStartSLOduration=40.187333001 podStartE2EDuration="40.187333001s" podCreationTimestamp="2026-04-21 10:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:16.162743155 +0000 UTC m=+47.367434384" watchObservedRunningTime="2026-04-21 10:13:16.187333001 +0000 UTC m=+47.392024230" Apr 21 10:13:16.188700 kubelet[2567]: I0421 10:13:16.187490 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f7c8cd7bf-5zwrh" podStartSLOduration=30.187485771 podStartE2EDuration="30.187485771s" podCreationTimestamp="2026-04-21 10:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:16.185075823 +0000 UTC m=+47.389767052" watchObservedRunningTime="2026-04-21 10:13:16.187485771 +0000 UTC m=+47.392177000" Apr 21 10:13:16.272624 systemd-networkd[1416]: calibc6c5ad985b: Link UP Apr 21 10:13:16.272914 systemd-networkd[1416]: calibc6c5ad985b: Gained carrier Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.134 [INFO][5051] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0 csi-node-driver- calico-system 45c6765d-bd67-4624-ba22-eae28e77978f 1009 0 2026-04-21 
10:12:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd csi-node-driver-jm9tq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibc6c5ad985b [] [] }} ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.135 [INFO][5051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.177 [INFO][5074] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" HandleID="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.186 [INFO][5074] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" HandleID="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"csi-node-driver-jm9tq", 
"timestamp":"2026-04-21 10:13:16.177200075 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000377080)} Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.186 [INFO][5074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.186 [INFO][5074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.186 [INFO][5074] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.197 [INFO][5074] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.205 [INFO][5074] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.215 [INFO][5074] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.217 [INFO][5074] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.220 [INFO][5074] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.220 [INFO][5074] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" 
host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.224 [INFO][5074] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.236 [INFO][5074] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.261 [INFO][5074] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.199/26] block=192.168.29.192/26 handle="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.261 [INFO][5074] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.199/26] handle="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.261 [INFO][5074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:13:16.301929 containerd[1505]: 2026-04-21 10:13:16.261 [INFO][5074] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.199/26] IPv6=[] ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" HandleID="k8s-pod-network.535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.302414 containerd[1505]: 2026-04-21 10:13:16.267 [INFO][5051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45c6765d-bd67-4624-ba22-eae28e77978f", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"csi-node-driver-jm9tq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc6c5ad985b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:16.302414 containerd[1505]: 2026-04-21 10:13:16.267 [INFO][5051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.199/32] ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.302414 containerd[1505]: 2026-04-21 10:13:16.267 [INFO][5051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc6c5ad985b ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.302414 containerd[1505]: 2026-04-21 10:13:16.280 [INFO][5051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.302414 containerd[1505]: 2026-04-21 10:13:16.280 [INFO][5051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45c6765d-bd67-4624-ba22-eae28e77978f", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f", Pod:"csi-node-driver-jm9tq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc6c5ad985b", MAC:"22:6c:49:2e:72:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:16.302414 containerd[1505]: 2026-04-21 10:13:16.298 [INFO][5051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f" Namespace="calico-system" Pod="csi-node-driver-jm9tq" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:16.350378 containerd[1505]: time="2026-04-21T10:13:16.347205590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:16.350378 containerd[1505]: time="2026-04-21T10:13:16.348465745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:16.350378 containerd[1505]: time="2026-04-21T10:13:16.348480475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:16.350378 containerd[1505]: time="2026-04-21T10:13:16.349287537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:16.351247 systemd-networkd[1416]: cali71bca3af798: Link UP Apr 21 10:13:16.353913 systemd-networkd[1416]: cali71bca3af798: Gained carrier Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.171 [INFO][5061] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0 coredns-66bc5c9577- kube-system b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c 1010 0 2026-04-21 10:12:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-5-d97ac59edd coredns-66bc5c9577-9qxs6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71bca3af798 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.171 [INFO][5061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.241 [INFO][5083] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" HandleID="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.264 [INFO][5083] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" HandleID="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-5-d97ac59edd", "pod":"coredns-66bc5c9577-9qxs6", "timestamp":"2026-04-21 10:13:16.241442206 +0000 UTC"}, Hostname:"ci-4081-3-7-5-d97ac59edd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188dc0)} Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.264 [INFO][5083] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.264 [INFO][5083] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.264 [INFO][5083] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5-d97ac59edd' Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.293 [INFO][5083] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.307 [INFO][5083] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.315 [INFO][5083] ipam/ipam.go 526: Trying affinity for 192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.319 [INFO][5083] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.321 [INFO][5083] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.192/26 host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.321 [INFO][5083] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.192/26 handle="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.325 [INFO][5083] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.329 [INFO][5083] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.192/26 handle="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.338 [INFO][5083] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.200/26] block=192.168.29.192/26 handle="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.338 [INFO][5083] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.200/26] handle="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" host="ci-4081-3-7-5-d97ac59edd" Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.338 [INFO][5083] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:16.374784 containerd[1505]: 2026-04-21 10:13:16.338 [INFO][5083] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.200/26] IPv6=[] ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" HandleID="k8s-pod-network.d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.375352 containerd[1505]: 2026-04-21 10:13:16.346 [INFO][5061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"", Pod:"coredns-66bc5c9577-9qxs6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71bca3af798", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:16.375352 containerd[1505]: 2026-04-21 10:13:16.346 [INFO][5061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.200/32] ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.375352 containerd[1505]: 2026-04-21 10:13:16.346 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71bca3af798 
ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.375352 containerd[1505]: 2026-04-21 10:13:16.355 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.375446 containerd[1505]: 2026-04-21 10:13:16.356 [INFO][5061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", 
ContainerID:"d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f", Pod:"coredns-66bc5c9577-9qxs6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71bca3af798", MAC:"0e:8c:f3:bf:45:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:16.375446 containerd[1505]: 2026-04-21 10:13:16.370 [INFO][5061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f" Namespace="kube-system" Pod="coredns-66bc5c9577-9qxs6" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:16.381950 systemd[1]: Started cri-containerd-535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f.scope - libcontainer container 535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f. Apr 21 10:13:16.411159 containerd[1505]: time="2026-04-21T10:13:16.410055496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:13:16.411159 containerd[1505]: time="2026-04-21T10:13:16.410148086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:13:16.411159 containerd[1505]: time="2026-04-21T10:13:16.410158866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:16.411336 containerd[1505]: time="2026-04-21T10:13:16.410362007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:13:16.450943 systemd[1]: Started cri-containerd-d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f.scope - libcontainer container d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f. Apr 21 10:13:16.457994 containerd[1505]: time="2026-04-21T10:13:16.457399069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm9tq,Uid:45c6765d-bd67-4624-ba22-eae28e77978f,Namespace:calico-system,Attempt:1,} returns sandbox id \"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f\"" Apr 21 10:13:16.509567 containerd[1505]: time="2026-04-21T10:13:16.509070047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qxs6,Uid:b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f\"" Apr 21 10:13:16.516465 containerd[1505]: time="2026-04-21T10:13:16.516267422Z" level=info msg="CreateContainer within sandbox \"d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:13:16.530782 containerd[1505]: time="2026-04-21T10:13:16.530743402Z" level=info msg="CreateContainer within sandbox 
\"d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b90a90f3a6140fdd0cc57ef4a759743b1dfa07546d4d471f8a8935f7949ba320\"" Apr 21 10:13:16.531843 containerd[1505]: time="2026-04-21T10:13:16.531347373Z" level=info msg="StartContainer for \"b90a90f3a6140fdd0cc57ef4a759743b1dfa07546d4d471f8a8935f7949ba320\"" Apr 21 10:13:16.565391 systemd[1]: Started cri-containerd-b90a90f3a6140fdd0cc57ef4a759743b1dfa07546d4d471f8a8935f7949ba320.scope - libcontainer container b90a90f3a6140fdd0cc57ef4a759743b1dfa07546d4d471f8a8935f7949ba320. Apr 21 10:13:16.600167 containerd[1505]: time="2026-04-21T10:13:16.599961299Z" level=info msg="StartContainer for \"b90a90f3a6140fdd0cc57ef4a759743b1dfa07546d4d471f8a8935f7949ba320\" returns successfully" Apr 21 10:13:16.623073 systemd-networkd[1416]: calif567db58b11: Gained IPv6LL Apr 21 10:13:16.816476 systemd-networkd[1416]: cali66358944cd7: Gained IPv6LL Apr 21 10:13:16.943903 systemd-networkd[1416]: cali54a48d2140c: Gained IPv6LL Apr 21 10:13:17.171463 kubelet[2567]: I0421 10:13:17.170734 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:17.196050 kubelet[2567]: I0421 10:13:17.196000 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9qxs6" podStartSLOduration=41.195970432 podStartE2EDuration="41.195970432s" podCreationTimestamp="2026-04-21 10:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:13:17.185204947 +0000 UTC m=+48.389896176" watchObservedRunningTime="2026-04-21 10:13:17.195970432 +0000 UTC m=+48.400661671" Apr 21 10:13:17.839084 systemd-networkd[1416]: calibc6c5ad985b: Gained IPv6LL Apr 21 10:13:18.287015 systemd-networkd[1416]: cali71bca3af798: Gained IPv6LL Apr 21 10:13:18.548456 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1087559373.mount: Deactivated successfully. Apr 21 10:13:18.833040 containerd[1505]: time="2026-04-21T10:13:18.832789614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:18.834257 containerd[1505]: time="2026-04-21T10:13:18.834152319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:13:18.835313 containerd[1505]: time="2026-04-21T10:13:18.835191711Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:18.837425 containerd[1505]: time="2026-04-21T10:13:18.837389148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:18.838061 containerd[1505]: time="2026-04-21T10:13:18.838034781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.154264266s" Apr 21 10:13:18.838116 containerd[1505]: time="2026-04-21T10:13:18.838066091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:13:18.839796 containerd[1505]: time="2026-04-21T10:13:18.839656105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:13:18.842953 containerd[1505]: time="2026-04-21T10:13:18.842430163Z" level=info 
msg="CreateContainer within sandbox \"a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:13:18.860483 containerd[1505]: time="2026-04-21T10:13:18.860452269Z" level=info msg="CreateContainer within sandbox \"a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f\"" Apr 21 10:13:18.861053 containerd[1505]: time="2026-04-21T10:13:18.861033961Z" level=info msg="StartContainer for \"541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f\"" Apr 21 10:13:18.887926 systemd[1]: Started cri-containerd-541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f.scope - libcontainer container 541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f. Apr 21 10:13:18.924948 containerd[1505]: time="2026-04-21T10:13:18.924803437Z" level=info msg="StartContainer for \"541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f\" returns successfully" Apr 21 10:13:19.383889 kubelet[2567]: I0421 10:13:19.382971 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:19.410369 kubelet[2567]: I0421 10:13:19.410215 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-8vcnx" podStartSLOduration=28.849037657 podStartE2EDuration="33.410193041s" podCreationTimestamp="2026-04-21 10:12:46 +0000 UTC" firstStartedPulling="2026-04-21 10:13:14.27791439 +0000 UTC m=+45.482605629" lastFinishedPulling="2026-04-21 10:13:18.839069784 +0000 UTC m=+50.043761013" observedRunningTime="2026-04-21 10:13:19.203285679 +0000 UTC m=+50.407976938" watchObservedRunningTime="2026-04-21 10:13:19.410193041 +0000 UTC m=+50.614884320" Apr 21 10:13:20.195562 systemd[1]: 
run-containerd-runc-k8s.io-541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f-runc.NcbHgl.mount: Deactivated successfully. Apr 21 10:13:21.191891 containerd[1505]: time="2026-04-21T10:13:21.191749317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:21.192930 containerd[1505]: time="2026-04-21T10:13:21.192734439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:13:21.194615 containerd[1505]: time="2026-04-21T10:13:21.193768162Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:21.195525 containerd[1505]: time="2026-04-21T10:13:21.195300426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:21.196329 containerd[1505]: time="2026-04-21T10:13:21.195949268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.356270473s" Apr 21 10:13:21.196329 containerd[1505]: time="2026-04-21T10:13:21.195977438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:13:21.198114 containerd[1505]: time="2026-04-21T10:13:21.198097644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" 
Apr 21 10:13:21.209655 containerd[1505]: time="2026-04-21T10:13:21.209605673Z" level=info msg="CreateContainer within sandbox \"1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:13:21.222545 containerd[1505]: time="2026-04-21T10:13:21.222444617Z" level=info msg="CreateContainer within sandbox \"1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8adea2ca6dee0ce270d558dcae9f682b2c7c5170bd7a1ddbf9e5d8986f02553\"" Apr 21 10:13:21.223635 containerd[1505]: time="2026-04-21T10:13:21.223071938Z" level=info msg="StartContainer for \"e8adea2ca6dee0ce270d558dcae9f682b2c7c5170bd7a1ddbf9e5d8986f02553\"" Apr 21 10:13:21.255991 systemd[1]: Started cri-containerd-e8adea2ca6dee0ce270d558dcae9f682b2c7c5170bd7a1ddbf9e5d8986f02553.scope - libcontainer container e8adea2ca6dee0ce270d558dcae9f682b2c7c5170bd7a1ddbf9e5d8986f02553. 
Apr 21 10:13:21.293847 containerd[1505]: time="2026-04-21T10:13:21.293244011Z" level=info msg="StartContainer for \"e8adea2ca6dee0ce270d558dcae9f682b2c7c5170bd7a1ddbf9e5d8986f02553\" returns successfully" Apr 21 10:13:22.209455 kubelet[2567]: I0421 10:13:22.208962 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-df6fcc68c-6prn8" podStartSLOduration=29.832088199 podStartE2EDuration="35.208948288s" podCreationTimestamp="2026-04-21 10:12:47 +0000 UTC" firstStartedPulling="2026-04-21 10:13:15.819744 +0000 UTC m=+47.024435239" lastFinishedPulling="2026-04-21 10:13:21.196604089 +0000 UTC m=+52.401295328" observedRunningTime="2026-04-21 10:13:22.208606577 +0000 UTC m=+53.413297806" watchObservedRunningTime="2026-04-21 10:13:22.208948288 +0000 UTC m=+53.413639517" Apr 21 10:13:23.066027 containerd[1505]: time="2026-04-21T10:13:23.065921274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:23.067936 containerd[1505]: time="2026-04-21T10:13:23.067449968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:13:23.070057 containerd[1505]: time="2026-04-21T10:13:23.070005484Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:23.073017 containerd[1505]: time="2026-04-21T10:13:23.072998911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:23.073558 containerd[1505]: time="2026-04-21T10:13:23.073437783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.875237029s" Apr 21 10:13:23.073558 containerd[1505]: time="2026-04-21T10:13:23.073460193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:13:23.078544 containerd[1505]: time="2026-04-21T10:13:23.078515545Z" level=info msg="CreateContainer within sandbox \"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:13:23.096653 containerd[1505]: time="2026-04-21T10:13:23.096618236Z" level=info msg="CreateContainer within sandbox \"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"713106d89c767eaeee4293907b6720c9eb0e484b4b84a5445fedfdf8faacbc4d\"" Apr 21 10:13:23.097130 containerd[1505]: time="2026-04-21T10:13:23.097089617Z" level=info msg="StartContainer for \"713106d89c767eaeee4293907b6720c9eb0e484b4b84a5445fedfdf8faacbc4d\"" Apr 21 10:13:23.128599 systemd[1]: Started cri-containerd-713106d89c767eaeee4293907b6720c9eb0e484b4b84a5445fedfdf8faacbc4d.scope - libcontainer container 713106d89c767eaeee4293907b6720c9eb0e484b4b84a5445fedfdf8faacbc4d. 
Apr 21 10:13:23.154417 containerd[1505]: time="2026-04-21T10:13:23.154378281Z" level=info msg="StartContainer for \"713106d89c767eaeee4293907b6720c9eb0e484b4b84a5445fedfdf8faacbc4d\" returns successfully" Apr 21 10:13:23.156052 containerd[1505]: time="2026-04-21T10:13:23.156032376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:13:25.045309 containerd[1505]: time="2026-04-21T10:13:25.045254877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:25.046480 containerd[1505]: time="2026-04-21T10:13:25.046365359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:13:25.047614 containerd[1505]: time="2026-04-21T10:13:25.047531662Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:25.049842 containerd[1505]: time="2026-04-21T10:13:25.049799327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:13:25.050302 containerd[1505]: time="2026-04-21T10:13:25.050271157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.894213491s" Apr 21 10:13:25.050334 containerd[1505]: time="2026-04-21T10:13:25.050305988Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:13:25.054599 containerd[1505]: time="2026-04-21T10:13:25.054556577Z" level=info msg="CreateContainer within sandbox \"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:13:25.068729 containerd[1505]: time="2026-04-21T10:13:25.068678027Z" level=info msg="CreateContainer within sandbox \"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8141edeb92aff0fdd1f751bd7e32f4368a546afe4abc870ec233bcdb5f1faa97\"" Apr 21 10:13:25.070039 containerd[1505]: time="2026-04-21T10:13:25.069112307Z" level=info msg="StartContainer for \"8141edeb92aff0fdd1f751bd7e32f4368a546afe4abc870ec233bcdb5f1faa97\"" Apr 21 10:13:25.102919 systemd[1]: Started cri-containerd-8141edeb92aff0fdd1f751bd7e32f4368a546afe4abc870ec233bcdb5f1faa97.scope - libcontainer container 8141edeb92aff0fdd1f751bd7e32f4368a546afe4abc870ec233bcdb5f1faa97. 
Apr 21 10:13:25.131600 containerd[1505]: time="2026-04-21T10:13:25.131555199Z" level=info msg="StartContainer for \"8141edeb92aff0fdd1f751bd7e32f4368a546afe4abc870ec233bcdb5f1faa97\" returns successfully" Apr 21 10:13:25.210901 kubelet[2567]: I0421 10:13:25.210856 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jm9tq" podStartSLOduration=29.619522654 podStartE2EDuration="38.210841316s" podCreationTimestamp="2026-04-21 10:12:47 +0000 UTC" firstStartedPulling="2026-04-21 10:13:16.459872437 +0000 UTC m=+47.664563666" lastFinishedPulling="2026-04-21 10:13:25.051191099 +0000 UTC m=+56.255882328" observedRunningTime="2026-04-21 10:13:25.209706533 +0000 UTC m=+56.414397772" watchObservedRunningTime="2026-04-21 10:13:25.210841316 +0000 UTC m=+56.415532545" Apr 21 10:13:25.979300 kubelet[2567]: I0421 10:13:25.979071 2567 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:13:25.981350 kubelet[2567]: I0421 10:13:25.981286 2567 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:13:28.876172 containerd[1505]: time="2026-04-21T10:13:28.876107188Z" level=info msg="StopPodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\"" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.936 [WARNING][5553] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"421160f3-b971-4667-88a9-535bf4abfe5b", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0", Pod:"calico-apiserver-7f7c8cd7bf-xtfdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicfd38b856da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.936 [INFO][5553] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.936 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" iface="eth0" netns="" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.936 [INFO][5553] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.936 [INFO][5553] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.962 [INFO][5562] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.963 [INFO][5562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.963 [INFO][5562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.967 [WARNING][5562] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.967 [INFO][5562] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.969 [INFO][5562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:28.973008 containerd[1505]: 2026-04-21 10:13:28.970 [INFO][5553] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:28.973328 containerd[1505]: time="2026-04-21T10:13:28.973022424Z" level=info msg="TearDown network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" successfully" Apr 21 10:13:28.973328 containerd[1505]: time="2026-04-21T10:13:28.973040514Z" level=info msg="StopPodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" returns successfully" Apr 21 10:13:28.973752 containerd[1505]: time="2026-04-21T10:13:28.973511185Z" level=info msg="RemovePodSandbox for \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\"" Apr 21 10:13:28.973752 containerd[1505]: time="2026-04-21T10:13:28.973532575Z" level=info msg="Forcibly stopping sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\"" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:28.998 [WARNING][5576] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"421160f3-b971-4667-88a9-535bf4abfe5b", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"6ab64ad3fe0723544c29210de4f46992f4c4c11fac14d3cc1d8d61b4b59612b0", Pod:"calico-apiserver-7f7c8cd7bf-xtfdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicfd38b856da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:28.998 [INFO][5576] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:28.998 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" iface="eth0" netns="" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:28.998 [INFO][5576] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:28.998 [INFO][5576] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.013 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.013 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.014 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.018 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.018 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" HandleID="k8s-pod-network.7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--xtfdc-eth0" Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.019 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.023903 containerd[1505]: 2026-04-21 10:13:29.021 [INFO][5576] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b" Apr 21 10:13:29.023903 containerd[1505]: time="2026-04-21T10:13:29.022998722Z" level=info msg="TearDown network for sandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" successfully" Apr 21 10:13:29.028092 containerd[1505]: time="2026-04-21T10:13:29.028048921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.028142 containerd[1505]: time="2026-04-21T10:13:29.028101261Z" level=info msg="RemovePodSandbox \"7dfccacddcd52e4a7e7b6c315e134138cb21e8b5c1147a0334e8186736968e7b\" returns successfully" Apr 21 10:13:29.028575 containerd[1505]: time="2026-04-21T10:13:29.028553042Z" level=info msg="StopPodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\"" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.053 [WARNING][5598] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0", GenerateName:"calico-kube-controllers-df6fcc68c-", Namespace:"calico-system", SelfLink:"", UID:"cf17b894-678a-46fa-83ff-56280e6c52d6", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df6fcc68c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220", Pod:"calico-kube-controllers-df6fcc68c-6prn8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.198/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif567db58b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.053 [INFO][5598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.053 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" iface="eth0" netns="" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.053 [INFO][5598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.053 [INFO][5598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.069 [INFO][5606] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.069 [INFO][5606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.069 [INFO][5606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.073 [WARNING][5606] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.073 [INFO][5606] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.074 [INFO][5606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.078348 containerd[1505]: 2026-04-21 10:13:29.076 [INFO][5598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.078721 containerd[1505]: time="2026-04-21T10:13:29.078378048Z" level=info msg="TearDown network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" successfully" Apr 21 10:13:29.078721 containerd[1505]: time="2026-04-21T10:13:29.078398568Z" level=info msg="StopPodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" returns successfully" Apr 21 10:13:29.078971 containerd[1505]: time="2026-04-21T10:13:29.078952259Z" level=info msg="RemovePodSandbox for \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\"" Apr 21 10:13:29.079885 containerd[1505]: time="2026-04-21T10:13:29.079028709Z" level=info msg="Forcibly stopping sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\"" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.105 [WARNING][5621] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0", GenerateName:"calico-kube-controllers-df6fcc68c-", Namespace:"calico-system", SelfLink:"", UID:"cf17b894-678a-46fa-83ff-56280e6c52d6", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"df6fcc68c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"1a324188d10ee137f0166bde8ee21896cbfc1c1113e01d2a690accb1f3a3c220", Pod:"calico-kube-controllers-df6fcc68c-6prn8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif567db58b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.105 [INFO][5621] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.105 [INFO][5621] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" iface="eth0" netns="" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.105 [INFO][5621] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.105 [INFO][5621] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.121 [INFO][5629] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.121 [INFO][5629] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.121 [INFO][5629] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.127 [WARNING][5629] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.127 [INFO][5629] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" HandleID="k8s-pod-network.759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--kube--controllers--df6fcc68c--6prn8-eth0" Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.129 [INFO][5629] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.134451 containerd[1505]: 2026-04-21 10:13:29.131 [INFO][5621] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5" Apr 21 10:13:29.134451 containerd[1505]: time="2026-04-21T10:13:29.134400274Z" level=info msg="TearDown network for sandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" successfully" Apr 21 10:13:29.140689 containerd[1505]: time="2026-04-21T10:13:29.139698223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.140689 containerd[1505]: time="2026-04-21T10:13:29.139781573Z" level=info msg="RemovePodSandbox \"759efb8d756cdb68ec5fcb24948151e6f77f5e3fc73a7dda0ad982897e2db4a5\" returns successfully" Apr 21 10:13:29.140689 containerd[1505]: time="2026-04-21T10:13:29.140023543Z" level=info msg="StopPodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\"" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.175 [WARNING][5643] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"563db82c-210b-4750-a813-d95c4fea43ab", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d", Pod:"calico-apiserver-7f7c8cd7bf-5zwrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali54a48d2140c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.176 [INFO][5643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.176 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" iface="eth0" netns="" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.176 [INFO][5643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.176 [INFO][5643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.190 [INFO][5650] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.190 [INFO][5650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.190 [INFO][5650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.195 [WARNING][5650] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.195 [INFO][5650] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.197 [INFO][5650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.200692 containerd[1505]: 2026-04-21 10:13:29.198 [INFO][5643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.201157 containerd[1505]: time="2026-04-21T10:13:29.200725029Z" level=info msg="TearDown network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" successfully" Apr 21 10:13:29.201157 containerd[1505]: time="2026-04-21T10:13:29.200752939Z" level=info msg="StopPodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" returns successfully" Apr 21 10:13:29.201322 containerd[1505]: time="2026-04-21T10:13:29.201304369Z" level=info msg="RemovePodSandbox for \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\"" Apr 21 10:13:29.201353 containerd[1505]: time="2026-04-21T10:13:29.201325060Z" level=info msg="Forcibly stopping sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\"" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.227 [WARNING][5664] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0", GenerateName:"calico-apiserver-7f7c8cd7bf-", Namespace:"calico-system", SelfLink:"", UID:"563db82c-210b-4750-a813-d95c4fea43ab", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7c8cd7bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"bc0a9dbe5d33a010d60a8167aa70b3ac81b8211de1f35e1f089679bca756200d", Pod:"calico-apiserver-7f7c8cd7bf-5zwrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali54a48d2140c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.227 [INFO][5664] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.227 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" iface="eth0" netns="" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.227 [INFO][5664] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.227 [INFO][5664] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.242 [INFO][5673] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.242 [INFO][5673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.242 [INFO][5673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.247 [WARNING][5673] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.247 [INFO][5673] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" HandleID="k8s-pod-network.81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Workload="ci--4081--3--7--5--d97ac59edd-k8s-calico--apiserver--7f7c8cd7bf--5zwrh-eth0" Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.249 [INFO][5673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.253504 containerd[1505]: 2026-04-21 10:13:29.251 [INFO][5664] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e" Apr 21 10:13:29.253898 containerd[1505]: time="2026-04-21T10:13:29.253551150Z" level=info msg="TearDown network for sandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" successfully" Apr 21 10:13:29.257545 containerd[1505]: time="2026-04-21T10:13:29.257520026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.257630 containerd[1505]: time="2026-04-21T10:13:29.257568966Z" level=info msg="RemovePodSandbox \"81abb21cfb6e7e53a57caad5a349bb8d97461eae08a94922bf384b8a41eea58e\" returns successfully" Apr 21 10:13:29.257946 containerd[1505]: time="2026-04-21T10:13:29.257928186Z" level=info msg="StopPodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\"" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.281 [WARNING][5688] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.281 [INFO][5688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.281 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" iface="eth0" netns="" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.281 [INFO][5688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.281 [INFO][5688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.296 [INFO][5695] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.296 [INFO][5695] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.296 [INFO][5695] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.301 [WARNING][5695] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.301 [INFO][5695] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.302 [INFO][5695] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.306521 containerd[1505]: 2026-04-21 10:13:29.304 [INFO][5688] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.306521 containerd[1505]: time="2026-04-21T10:13:29.306427441Z" level=info msg="TearDown network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" successfully" Apr 21 10:13:29.306521 containerd[1505]: time="2026-04-21T10:13:29.306448711Z" level=info msg="StopPodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" returns successfully" Apr 21 10:13:29.307046 containerd[1505]: time="2026-04-21T10:13:29.306986661Z" level=info msg="RemovePodSandbox for \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\"" Apr 21 10:13:29.307046 containerd[1505]: time="2026-04-21T10:13:29.307006742Z" level=info msg="Forcibly stopping sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\"" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.332 [WARNING][5709] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" WorkloadEndpoint="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.332 [INFO][5709] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.332 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" iface="eth0" netns="" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.332 [INFO][5709] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.332 [INFO][5709] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.348 [INFO][5717] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.349 [INFO][5717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.349 [INFO][5717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.355 [WARNING][5717] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.355 [INFO][5717] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" HandleID="k8s-pod-network.8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Workload="ci--4081--3--7--5--d97ac59edd-k8s-whisker--85c6676df8--6p4r8-eth0" Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.356 [INFO][5717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.360971 containerd[1505]: 2026-04-21 10:13:29.358 [INFO][5709] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656" Apr 21 10:13:29.360971 containerd[1505]: time="2026-04-21T10:13:29.360965254Z" level=info msg="TearDown network for sandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" successfully" Apr 21 10:13:29.364424 containerd[1505]: time="2026-04-21T10:13:29.364394961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.364509 containerd[1505]: time="2026-04-21T10:13:29.364447681Z" level=info msg="RemovePodSandbox \"8f791e63baee9c0b8e0b68818c345defcbba2d9752d05a55b0d871846039b656\" returns successfully" Apr 21 10:13:29.364989 containerd[1505]: time="2026-04-21T10:13:29.364952141Z" level=info msg="StopPodSandbox for \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\"" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.391 [WARNING][5732] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f", Pod:"coredns-66bc5c9577-9qxs6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71bca3af798", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.391 [INFO][5732] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.391 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" iface="eth0" netns="" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.391 [INFO][5732] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.391 [INFO][5732] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.408 [INFO][5740] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.408 [INFO][5740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.408 [INFO][5740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.412 [WARNING][5740] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.412 [INFO][5740] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.413 [INFO][5740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.417807 containerd[1505]: 2026-04-21 10:13:29.415 [INFO][5732] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.417807 containerd[1505]: time="2026-04-21T10:13:29.417697972Z" level=info msg="TearDown network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" successfully" Apr 21 10:13:29.417807 containerd[1505]: time="2026-04-21T10:13:29.417719212Z" level=info msg="StopPodSandbox for \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" returns successfully" Apr 21 10:13:29.418237 containerd[1505]: time="2026-04-21T10:13:29.418144703Z" level=info msg="RemovePodSandbox for \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\"" Apr 21 10:13:29.418237 containerd[1505]: time="2026-04-21T10:13:29.418165223Z" level=info msg="Forcibly stopping sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\"" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.447 [WARNING][5755] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4deaa4d-ddd7-4d60-919c-5bfca2a7dd6c", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"d5b71745cd28fd3d1b9bbb9205e0f56906809d1cb402a06800bee384934b145f", Pod:"coredns-66bc5c9577-9qxs6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71bca3af798", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.448 [INFO][5755] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.448 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" iface="eth0" netns="" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.448 [INFO][5755] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.448 [INFO][5755] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.464 [INFO][5762] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.464 [INFO][5762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.464 [INFO][5762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.468 [WARNING][5762] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.468 [INFO][5762] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" HandleID="k8s-pod-network.280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--9qxs6-eth0" Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.469 [INFO][5762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.473823 containerd[1505]: 2026-04-21 10:13:29.471 [INFO][5755] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c" Apr 21 10:13:29.473823 containerd[1505]: time="2026-04-21T10:13:29.473660008Z" level=info msg="TearDown network for sandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" successfully" Apr 21 10:13:29.478028 containerd[1505]: time="2026-04-21T10:13:29.477991426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.478028 containerd[1505]: time="2026-04-21T10:13:29.478048566Z" level=info msg="RemovePodSandbox \"280f0288d1631eb27248e94aa6cc734ce242581e9934da4e5784c8b939df489c\" returns successfully" Apr 21 10:13:29.478728 containerd[1505]: time="2026-04-21T10:13:29.478478027Z" level=info msg="StopPodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\"" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.509 [WARNING][5777] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df642170-a65e-4ca4-b9a5-68415ff77a88", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576", Pod:"coredns-66bc5c9577-n2ccb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66358944cd7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.509 [INFO][5777] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.509 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" iface="eth0" netns="" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.509 [INFO][5777] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.509 [INFO][5777] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.527 [INFO][5784] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.527 [INFO][5784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.527 [INFO][5784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.532 [WARNING][5784] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.532 [INFO][5784] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.533 [INFO][5784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.538169 containerd[1505]: 2026-04-21 10:13:29.535 [INFO][5777] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.538760 containerd[1505]: time="2026-04-21T10:13:29.538215069Z" level=info msg="TearDown network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" successfully" Apr 21 10:13:29.538760 containerd[1505]: time="2026-04-21T10:13:29.538245310Z" level=info msg="StopPodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" returns successfully" Apr 21 10:13:29.538760 containerd[1505]: time="2026-04-21T10:13:29.538752230Z" level=info msg="RemovePodSandbox for \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\"" Apr 21 10:13:29.538892 containerd[1505]: time="2026-04-21T10:13:29.538777510Z" level=info msg="Forcibly stopping sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\"" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.571 [WARNING][5800] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df642170-a65e-4ca4-b9a5-68415ff77a88", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"396c5b1986f6cef7cd2c2269241a5349d71ca6259b6c92c7fb1ca59d905de576", Pod:"coredns-66bc5c9577-n2ccb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66358944cd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.571 [INFO][5800] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.571 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" iface="eth0" netns="" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.571 [INFO][5800] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.571 [INFO][5800] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.589 [INFO][5807] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.589 [INFO][5807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.589 [INFO][5807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.595 [WARNING][5807] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.595 [INFO][5807] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" HandleID="k8s-pod-network.729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Workload="ci--4081--3--7--5--d97ac59edd-k8s-coredns--66bc5c9577--n2ccb-eth0" Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.596 [INFO][5807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.601381 containerd[1505]: 2026-04-21 10:13:29.599 [INFO][5800] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea" Apr 21 10:13:29.601704 containerd[1505]: time="2026-04-21T10:13:29.601652648Z" level=info msg="TearDown network for sandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" successfully" Apr 21 10:13:29.606428 containerd[1505]: time="2026-04-21T10:13:29.606394627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.606490 containerd[1505]: time="2026-04-21T10:13:29.606458427Z" level=info msg="RemovePodSandbox \"729defdb21e95d60b33aa1bc305547c39b702435c9c7b302caa22c317ae080ea\" returns successfully" Apr 21 10:13:29.607019 containerd[1505]: time="2026-04-21T10:13:29.606992188Z" level=info msg="StopPodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\"" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.633 [WARNING][5822] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45c6765d-bd67-4624-ba22-eae28e77978f", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f", Pod:"csi-node-driver-jm9tq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc6c5ad985b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.633 [INFO][5822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.633 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" iface="eth0" netns="" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.633 [INFO][5822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.633 [INFO][5822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.650 [INFO][5830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.651 [INFO][5830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.651 [INFO][5830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.655 [WARNING][5830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.655 [INFO][5830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.656 [INFO][5830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.661151 containerd[1505]: 2026-04-21 10:13:29.659 [INFO][5822] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.661151 containerd[1505]: time="2026-04-21T10:13:29.661158761Z" level=info msg="TearDown network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" successfully" Apr 21 10:13:29.661151 containerd[1505]: time="2026-04-21T10:13:29.661183271Z" level=info msg="StopPodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" returns successfully" Apr 21 10:13:29.662500 containerd[1505]: time="2026-04-21T10:13:29.661698392Z" level=info msg="RemovePodSandbox for \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\"" Apr 21 10:13:29.662500 containerd[1505]: time="2026-04-21T10:13:29.661734103Z" level=info msg="Forcibly stopping sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\"" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.693 [WARNING][5845] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45c6765d-bd67-4624-ba22-eae28e77978f", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"535e0d6059ca1f337783707f342a284f1483066cfde361d78d8f9ffc6037d30f", Pod:"csi-node-driver-jm9tq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc6c5ad985b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.693 [INFO][5845] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.693 [INFO][5845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" iface="eth0" netns="" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.693 [INFO][5845] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.693 [INFO][5845] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.710 [INFO][5852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.710 [INFO][5852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.710 [INFO][5852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.714 [WARNING][5852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.714 [INFO][5852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" HandleID="k8s-pod-network.a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Workload="ci--4081--3--7--5--d97ac59edd-k8s-csi--node--driver--jm9tq-eth0" Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.716 [INFO][5852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.721563 containerd[1505]: 2026-04-21 10:13:29.718 [INFO][5845] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9" Apr 21 10:13:29.721563 containerd[1505]: time="2026-04-21T10:13:29.720015103Z" level=info msg="TearDown network for sandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" successfully" Apr 21 10:13:29.724013 containerd[1505]: time="2026-04-21T10:13:29.723983679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:13:29.724107 containerd[1505]: time="2026-04-21T10:13:29.724034549Z" level=info msg="RemovePodSandbox \"a1c351f49d6810a7fe3e1fb2891d7dc78168455a4a8acd850a2e5c85b38fd4a9\" returns successfully" Apr 21 10:13:29.724620 containerd[1505]: time="2026-04-21T10:13:29.724409381Z" level=info msg="StopPodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\"" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.751 [WARNING][5866] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"33a681f4-231e-46d8-acc0-27e3c3cf72d1", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4", Pod:"goldmane-cccfbd5cf-8vcnx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali7409c8746b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.752 [INFO][5866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.752 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" iface="eth0" netns="" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.752 [INFO][5866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.752 [INFO][5866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.768 [INFO][5873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.769 [INFO][5873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.769 [INFO][5873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.773 [WARNING][5873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.773 [INFO][5873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.775 [INFO][5873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.778729 containerd[1505]: 2026-04-21 10:13:29.776 [INFO][5866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.779436 containerd[1505]: time="2026-04-21T10:13:29.778772134Z" level=info msg="TearDown network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" successfully" Apr 21 10:13:29.779436 containerd[1505]: time="2026-04-21T10:13:29.778794324Z" level=info msg="StopPodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" returns successfully" Apr 21 10:13:29.779436 containerd[1505]: time="2026-04-21T10:13:29.779281444Z" level=info msg="RemovePodSandbox for \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\"" Apr 21 10:13:29.779436 containerd[1505]: time="2026-04-21T10:13:29.779306434Z" level=info msg="Forcibly stopping sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\"" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.811 [WARNING][5887] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"33a681f4-231e-46d8-acc0-27e3c3cf72d1", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5-d97ac59edd", ContainerID:"a99fd9e2799f8853956315c941dffe354c39090438e75dfdf70b5f5f3e6493c4", Pod:"goldmane-cccfbd5cf-8vcnx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7409c8746b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.811 [INFO][5887] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.811 [INFO][5887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" iface="eth0" netns="" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.811 [INFO][5887] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.811 [INFO][5887] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.827 [INFO][5895] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.827 [INFO][5895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.827 [INFO][5895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.831 [WARNING][5895] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.832 [INFO][5895] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" HandleID="k8s-pod-network.41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Workload="ci--4081--3--7--5--d97ac59edd-k8s-goldmane--cccfbd5cf--8vcnx-eth0" Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.833 [INFO][5895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:13:29.837612 containerd[1505]: 2026-04-21 10:13:29.835 [INFO][5887] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a" Apr 21 10:13:29.838175 containerd[1505]: time="2026-04-21T10:13:29.838129396Z" level=info msg="TearDown network for sandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" successfully" Apr 21 10:13:29.841851 containerd[1505]: time="2026-04-21T10:13:29.841794973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:13:29.841907 containerd[1505]: time="2026-04-21T10:13:29.841868873Z" level=info msg="RemovePodSandbox \"41b2d2063386bdd15d45c7b26cd8fe8adb6e1ea1e5a4b2a649db43cb9e53e65a\" returns successfully" Apr 21 10:13:36.988517 systemd[1]: run-containerd-runc-k8s.io-e15e7d8e9c1b4ceb67982a033723b22eed2e68503bd1ac550eaab09118691dd6-runc.oncWHr.mount: Deactivated successfully. 
Apr 21 10:13:43.105043 systemd[1]: run-containerd-runc-k8s.io-541a2c5353774100cf2144a0659194fcecf324d78f9cc9c7161ba17504f6f33f-runc.hBJhoy.mount: Deactivated successfully. Apr 21 10:13:45.767171 systemd[1]: Started sshd@7-46.62.167.148:22-50.85.169.122:46500.service - OpenSSH per-connection server daemon (50.85.169.122:46500). Apr 21 10:13:45.988424 sshd[5961]: Accepted publickey for core from 50.85.169.122 port 46500 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:13:45.991667 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:13:46.000451 systemd-logind[1496]: New session 8 of user core. Apr 21 10:13:46.005025 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:13:46.227047 sshd[5961]: pam_unix(sshd:session): session closed for user core Apr 21 10:13:46.231776 systemd[1]: sshd@7-46.62.167.148:22-50.85.169.122:46500.service: Deactivated successfully. Apr 21 10:13:46.233697 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:13:46.234749 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:13:46.235683 systemd-logind[1496]: Removed session 8. Apr 21 10:13:50.833525 kubelet[2567]: I0421 10:13:50.833277 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:13:51.276077 systemd[1]: Started sshd@8-46.62.167.148:22-50.85.169.122:48588.service - OpenSSH per-connection server daemon (50.85.169.122:48588). Apr 21 10:13:51.500768 sshd[6021]: Accepted publickey for core from 50.85.169.122 port 48588 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:13:51.503467 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:13:51.512257 systemd-logind[1496]: New session 9 of user core. Apr 21 10:13:51.520041 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 10:13:51.747267 sshd[6021]: pam_unix(sshd:session): session closed for user core Apr 21 10:13:51.751034 systemd[1]: sshd@8-46.62.167.148:22-50.85.169.122:48588.service: Deactivated successfully. Apr 21 10:13:51.752910 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:13:51.754657 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:13:51.756931 systemd-logind[1496]: Removed session 9. Apr 21 10:13:56.797242 systemd[1]: Started sshd@9-46.62.167.148:22-50.85.169.122:48590.service - OpenSSH per-connection server daemon (50.85.169.122:48590). Apr 21 10:13:57.024500 sshd[6080]: Accepted publickey for core from 50.85.169.122 port 48590 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:13:57.031210 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:13:57.039194 systemd-logind[1496]: New session 10 of user core. Apr 21 10:13:57.049091 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:13:57.295991 sshd[6080]: pam_unix(sshd:session): session closed for user core Apr 21 10:13:57.302188 systemd[1]: sshd@9-46.62.167.148:22-50.85.169.122:48590.service: Deactivated successfully. Apr 21 10:13:57.304552 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:13:57.305641 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:13:57.307223 systemd-logind[1496]: Removed session 10. Apr 21 10:14:02.350239 systemd[1]: Started sshd@10-46.62.167.148:22-50.85.169.122:48440.service - OpenSSH per-connection server daemon (50.85.169.122:48440). Apr 21 10:14:02.573870 sshd[6110]: Accepted publickey for core from 50.85.169.122 port 48440 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:14:02.576147 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:02.583893 systemd-logind[1496]: New session 11 of user core. 
Apr 21 10:14:02.592089 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:14:02.825952 sshd[6110]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:02.833716 systemd[1]: sshd@10-46.62.167.148:22-50.85.169.122:48440.service: Deactivated successfully. Apr 21 10:14:02.837429 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:14:02.840031 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:14:02.842739 systemd-logind[1496]: Removed session 11. Apr 21 10:14:02.871288 systemd[1]: Started sshd@11-46.62.167.148:22-50.85.169.122:48452.service - OpenSSH per-connection server daemon (50.85.169.122:48452). Apr 21 10:14:03.073717 sshd[6124]: Accepted publickey for core from 50.85.169.122 port 48452 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:14:03.077303 sshd[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:03.089113 systemd-logind[1496]: New session 12 of user core. Apr 21 10:14:03.099172 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:14:03.329464 sshd[6124]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:03.334297 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:14:03.337284 systemd[1]: sshd@11-46.62.167.148:22-50.85.169.122:48452.service: Deactivated successfully. Apr 21 10:14:03.341130 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:14:03.345052 systemd-logind[1496]: Removed session 12. Apr 21 10:14:03.376912 systemd[1]: Started sshd@12-46.62.167.148:22-50.85.169.122:48458.service - OpenSSH per-connection server daemon (50.85.169.122:48458). 
Apr 21 10:14:03.587914 sshd[6137]: Accepted publickey for core from 50.85.169.122 port 48458 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:14:03.590015 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:03.595356 systemd-logind[1496]: New session 13 of user core. Apr 21 10:14:03.599166 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:14:03.832257 sshd[6137]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:03.836527 systemd[1]: sshd@12-46.62.167.148:22-50.85.169.122:48458.service: Deactivated successfully. Apr 21 10:14:03.838479 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:14:03.840782 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:14:03.842290 systemd-logind[1496]: Removed session 13. Apr 21 10:14:08.881862 systemd[1]: Started sshd@13-46.62.167.148:22-50.85.169.122:48468.service - OpenSSH per-connection server daemon (50.85.169.122:48468). Apr 21 10:14:09.106443 sshd[6174]: Accepted publickey for core from 50.85.169.122 port 48468 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:14:09.108957 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:14:09.117922 systemd-logind[1496]: New session 14 of user core. Apr 21 10:14:09.122066 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:14:09.375481 sshd[6174]: pam_unix(sshd:session): session closed for user core Apr 21 10:14:09.381198 systemd[1]: sshd@13-46.62.167.148:22-50.85.169.122:48468.service: Deactivated successfully. Apr 21 10:14:09.381445 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:14:09.383681 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:14:09.386379 systemd-logind[1496]: Removed session 14. 
Apr 21 10:14:09.414086 systemd[1]: Started sshd@14-46.62.167.148:22-50.85.169.122:48480.service - OpenSSH per-connection server daemon (50.85.169.122:48480).
Apr 21 10:14:09.620626 sshd[6187]: Accepted publickey for core from 50.85.169.122 port 48480 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:14:09.623464 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:09.629618 systemd-logind[1496]: New session 15 of user core.
Apr 21 10:14:09.639992 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:14:10.027738 sshd[6187]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:10.031362 systemd[1]: sshd@14-46.62.167.148:22-50.85.169.122:48480.service: Deactivated successfully.
Apr 21 10:14:10.033597 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:14:10.035723 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:14:10.037027 systemd-logind[1496]: Removed session 15.
Apr 21 10:14:10.072618 systemd[1]: Started sshd@15-46.62.167.148:22-50.85.169.122:41290.service - OpenSSH per-connection server daemon (50.85.169.122:41290).
Apr 21 10:14:10.284513 sshd[6198]: Accepted publickey for core from 50.85.169.122 port 41290 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:14:10.287783 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:10.296716 systemd-logind[1496]: New session 16 of user core.
Apr 21 10:14:10.306067 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:14:11.175454 sshd[6198]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:11.179988 systemd[1]: sshd@15-46.62.167.148:22-50.85.169.122:41290.service: Deactivated successfully.
Apr 21 10:14:11.180315 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:14:11.183126 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:14:11.187007 systemd-logind[1496]: Removed session 16.
Apr 21 10:14:11.218248 systemd[1]: Started sshd@16-46.62.167.148:22-50.85.169.122:41292.service - OpenSSH per-connection server daemon (50.85.169.122:41292).
Apr 21 10:14:11.426253 sshd[6223]: Accepted publickey for core from 50.85.169.122 port 41292 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:14:11.429218 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:11.438147 systemd-logind[1496]: New session 17 of user core.
Apr 21 10:14:11.443451 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:14:11.777355 sshd[6223]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:11.782350 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:14:11.782667 systemd[1]: sshd@16-46.62.167.148:22-50.85.169.122:41292.service: Deactivated successfully.
Apr 21 10:14:11.786467 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:14:11.788166 systemd-logind[1496]: Removed session 17.
Apr 21 10:14:11.824305 systemd[1]: Started sshd@17-46.62.167.148:22-50.85.169.122:41306.service - OpenSSH per-connection server daemon (50.85.169.122:41306).
Apr 21 10:14:12.028661 sshd[6234]: Accepted publickey for core from 50.85.169.122 port 41306 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:14:12.031791 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:12.039944 systemd-logind[1496]: New session 18 of user core.
Apr 21 10:14:12.048186 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:14:12.263483 sshd[6234]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:12.269663 systemd[1]: sshd@17-46.62.167.148:22-50.85.169.122:41306.service: Deactivated successfully.
Apr 21 10:14:12.273778 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:14:12.274944 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:14:12.276149 systemd-logind[1496]: Removed session 18.
Apr 21 10:14:17.309265 systemd[1]: Started sshd@18-46.62.167.148:22-50.85.169.122:41314.service - OpenSSH per-connection server daemon (50.85.169.122:41314).
Apr 21 10:14:17.530599 sshd[6251]: Accepted publickey for core from 50.85.169.122 port 41314 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:14:17.533603 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:17.542580 systemd-logind[1496]: New session 19 of user core.
Apr 21 10:14:17.551055 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:14:17.764189 sshd[6251]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:17.769226 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:14:17.770146 systemd[1]: sshd@18-46.62.167.148:22-50.85.169.122:41314.service: Deactivated successfully.
Apr 21 10:14:17.774421 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:14:17.776745 systemd-logind[1496]: Removed session 19.
Apr 21 10:14:22.817398 systemd[1]: Started sshd@19-46.62.167.148:22-50.85.169.122:58258.service - OpenSSH per-connection server daemon (50.85.169.122:58258).
Apr 21 10:14:23.042242 sshd[6302]: Accepted publickey for core from 50.85.169.122 port 58258 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:14:23.045109 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:14:23.050452 systemd-logind[1496]: New session 20 of user core.
Apr 21 10:14:23.056034 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:14:23.288210 sshd[6302]: pam_unix(sshd:session): session closed for user core
Apr 21 10:14:23.293060 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:14:23.293426 systemd[1]: sshd@19-46.62.167.148:22-50.85.169.122:58258.service: Deactivated successfully.
Apr 21 10:14:23.295513 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:14:23.297717 systemd-logind[1496]: Removed session 20.
Apr 21 10:14:35.801248 systemd[1]: Started sshd@20-46.62.167.148:22-222.128.15.127:55336.service - OpenSSH per-connection server daemon (222.128.15.127:55336).
Apr 21 10:14:39.749915 sshd[6327]: Invalid user default from 222.128.15.127 port 55336
Apr 21 10:14:40.945592 systemd[1]: cri-containerd-8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6.scope: Deactivated successfully.
Apr 21 10:14:40.946134 systemd[1]: cri-containerd-8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6.scope: Consumed 3.120s CPU time, 18.5M memory peak, 0B memory swap peak.
Apr 21 10:14:40.980518 containerd[1505]: time="2026-04-21T10:14:40.980103234Z" level=info msg="shim disconnected" id=8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6 namespace=k8s.io
Apr 21 10:14:40.980518 containerd[1505]: time="2026-04-21T10:14:40.980166964Z" level=warning msg="cleaning up after shim disconnected" id=8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6 namespace=k8s.io
Apr 21 10:14:40.980518 containerd[1505]: time="2026-04-21T10:14:40.980180184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:40.982714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6-rootfs.mount: Deactivated successfully.
Apr 21 10:14:41.169373 kubelet[2567]: E0421 10:14:41.169235 2567 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57388->10.0.0.2:2379: read: connection timed out"
Apr 21 10:14:41.319352 sshd[6415]: pam_faillock(sshd:auth): User unknown
Apr 21 10:14:41.325689 sshd[6327]: Postponed keyboard-interactive for invalid user default from 222.128.15.127 port 55336 ssh2 [preauth]
Apr 21 10:14:41.412580 kubelet[2567]: I0421 10:14:41.411991 2567 scope.go:117] "RemoveContainer" containerID="8b97f75bf2176ea02877e1b7ba5a1cc39ecd084323449a79da05d16eb4d272b6"
Apr 21 10:14:41.416256 containerd[1505]: time="2026-04-21T10:14:41.416180001Z" level=info msg="CreateContainer within sandbox \"c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 21 10:14:41.438159 containerd[1505]: time="2026-04-21T10:14:41.438090842Z" level=info msg="CreateContainer within sandbox \"c02e61979ab64b28522719faa1deca5122a6f789a53211254a016ae74e8ddc8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"624e5be738bdeb9454d42f7d12cac1b37bcd63a1dcc26365fde6ddd475cb9d0a\""
Apr 21 10:14:41.441029 containerd[1505]: time="2026-04-21T10:14:41.438981829Z" level=info msg="StartContainer for \"624e5be738bdeb9454d42f7d12cac1b37bcd63a1dcc26365fde6ddd475cb9d0a\""
Apr 21 10:14:41.491420 systemd[1]: cri-containerd-48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75.scope: Deactivated successfully.
Apr 21 10:14:41.491791 systemd[1]: cri-containerd-48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75.scope: Consumed 6.700s CPU time.
Apr 21 10:14:41.500651 systemd[1]: Started cri-containerd-624e5be738bdeb9454d42f7d12cac1b37bcd63a1dcc26365fde6ddd475cb9d0a.scope - libcontainer container 624e5be738bdeb9454d42f7d12cac1b37bcd63a1dcc26365fde6ddd475cb9d0a.
Apr 21 10:14:41.514720 containerd[1505]: time="2026-04-21T10:14:41.514001747Z" level=info msg="shim disconnected" id=48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75 namespace=k8s.io
Apr 21 10:14:41.514720 containerd[1505]: time="2026-04-21T10:14:41.514079686Z" level=warning msg="cleaning up after shim disconnected" id=48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75 namespace=k8s.io
Apr 21 10:14:41.514720 containerd[1505]: time="2026-04-21T10:14:41.514086376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:41.541953 containerd[1505]: time="2026-04-21T10:14:41.541847673Z" level=info msg="StartContainer for \"624e5be738bdeb9454d42f7d12cac1b37bcd63a1dcc26365fde6ddd475cb9d0a\" returns successfully"
Apr 21 10:14:41.984796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75-rootfs.mount: Deactivated successfully.
Apr 21 10:14:42.203079 sshd[6415]: pam_unix(sshd:auth): check pass; user unknown
Apr 21 10:14:42.203144 sshd[6415]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=222.128.15.127
Apr 21 10:14:42.204410 sshd[6415]: pam_faillock(sshd:auth): User unknown
Apr 21 10:14:42.416198 kubelet[2567]: I0421 10:14:42.415984 2567 scope.go:117] "RemoveContainer" containerID="48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75"
Apr 21 10:14:42.419618 containerd[1505]: time="2026-04-21T10:14:42.419571425Z" level=info msg="CreateContainer within sandbox \"b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 21 10:14:42.434876 containerd[1505]: time="2026-04-21T10:14:42.434213002Z" level=info msg="CreateContainer within sandbox \"b6495645c4b05368ba8999d5e524ca9d5a83cb4c3cb139d37e733a80250abcc1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69\""
Apr 21 10:14:42.436381 containerd[1505]: time="2026-04-21T10:14:42.435703609Z" level=info msg="StartContainer for \"fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69\""
Apr 21 10:14:42.474948 systemd[1]: Started cri-containerd-fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69.scope - libcontainer container fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69.
Apr 21 10:14:42.497571 containerd[1505]: time="2026-04-21T10:14:42.497541550Z" level=info msg="StartContainer for \"fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69\" returns successfully"
Apr 21 10:14:44.020119 sshd[6327]: PAM: Permission denied for illegal user default from 222.128.15.127
Apr 21 10:14:44.020911 sshd[6327]: Failed keyboard-interactive/pam for invalid user default from 222.128.15.127 port 55336 ssh2
Apr 21 10:14:44.780754 sshd[6327]: Connection closed by invalid user default 222.128.15.127 port 55336 [preauth]
Apr 21 10:14:44.785323 systemd[1]: sshd@20-46.62.167.148:22-222.128.15.127:55336.service: Deactivated successfully.
Apr 21 10:14:45.824278 systemd[1]: cri-containerd-9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285.scope: Deactivated successfully.
Apr 21 10:14:45.825269 systemd[1]: cri-containerd-9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285.scope: Consumed 1.564s CPU time, 16.3M memory peak, 0B memory swap peak.
Apr 21 10:14:45.848522 containerd[1505]: time="2026-04-21T10:14:45.848373696Z" level=info msg="shim disconnected" id=9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285 namespace=k8s.io
Apr 21 10:14:45.848522 containerd[1505]: time="2026-04-21T10:14:45.848434756Z" level=warning msg="cleaning up after shim disconnected" id=9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285 namespace=k8s.io
Apr 21 10:14:45.848522 containerd[1505]: time="2026-04-21T10:14:45.848444206Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:45.849451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285-rootfs.mount: Deactivated successfully.
Apr 21 10:14:45.905227 kubelet[2567]: E0421 10:14:45.904105 2567 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56972->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-5-d97ac59edd.18a857b21a1008f9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-5-d97ac59edd,UID:acd7cb6fee4c7b4fe575b5ad77bb45ef,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-5-d97ac59edd,},FirstTimestamp:2026-04-21 10:14:35.450714361 +0000 UTC m=+126.655405630,LastTimestamp:2026-04-21 10:14:35.450714361 +0000 UTC m=+126.655405630,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-5-d97ac59edd,}"
Apr 21 10:14:46.430664 kubelet[2567]: I0421 10:14:46.430630 2567 scope.go:117] "RemoveContainer" containerID="9072b559cee5bb86dc26fc853e4c5d1251ca5a5a7c1b0d6fee92792454dbf285"
Apr 21 10:14:46.432214 containerd[1505]: time="2026-04-21T10:14:46.432168083Z" level=info msg="CreateContainer within sandbox \"07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 21 10:14:46.441495 containerd[1505]: time="2026-04-21T10:14:46.441442743Z" level=info msg="CreateContainer within sandbox \"07ec4917afb8c7e896202317476704aec7dd0054028c25b2fb8a938a7987638c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2ddcc73086a3f0146edc2b5102c3f47640e7e9c7fe2119226e4d4b510cbd4f28\""
Apr 21 10:14:46.442303 containerd[1505]: time="2026-04-21T10:14:46.441896513Z" level=info msg="StartContainer for \"2ddcc73086a3f0146edc2b5102c3f47640e7e9c7fe2119226e4d4b510cbd4f28\""
Apr 21 10:14:46.480913 systemd[1]: Started cri-containerd-2ddcc73086a3f0146edc2b5102c3f47640e7e9c7fe2119226e4d4b510cbd4f28.scope - libcontainer container 2ddcc73086a3f0146edc2b5102c3f47640e7e9c7fe2119226e4d4b510cbd4f28.
Apr 21 10:14:46.517863 containerd[1505]: time="2026-04-21T10:14:46.516908526Z" level=info msg="StartContainer for \"2ddcc73086a3f0146edc2b5102c3f47640e7e9c7fe2119226e4d4b510cbd4f28\" returns successfully"
Apr 21 10:14:51.170353 kubelet[2567]: E0421 10:14:51.169901 2567 controller.go:195] "Failed to update lease" err="Put \"https://46.62.167.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5-d97ac59edd?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 21 10:14:53.671994 systemd[1]: cri-containerd-fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69.scope: Deactivated successfully.
Apr 21 10:14:53.715755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69-rootfs.mount: Deactivated successfully.
Apr 21 10:14:53.723337 containerd[1505]: time="2026-04-21T10:14:53.723221353Z" level=info msg="shim disconnected" id=fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69 namespace=k8s.io
Apr 21 10:14:53.723337 containerd[1505]: time="2026-04-21T10:14:53.723298743Z" level=warning msg="cleaning up after shim disconnected" id=fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69 namespace=k8s.io
Apr 21 10:14:53.723337 containerd[1505]: time="2026-04-21T10:14:53.723315553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:54.458838 kubelet[2567]: I0421 10:14:54.458630 2567 scope.go:117] "RemoveContainer" containerID="48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75"
Apr 21 10:14:54.459239 kubelet[2567]: I0421 10:14:54.458966 2567 scope.go:117] "RemoveContainer" containerID="fec46d8a4156a759838798c998bcdef5ec36d0c57dd07811fecf2fc117016d69"
Apr 21 10:14:54.459239 kubelet[2567]: E0421 10:14:54.459107 2567 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-xsnhd_tigera-operator(ff771875-d449-4eb1-8191-7ebf62566a15)\"" pod="tigera-operator/tigera-operator-5588576f44-xsnhd" podUID="ff771875-d449-4eb1-8191-7ebf62566a15"
Apr 21 10:14:54.460141 containerd[1505]: time="2026-04-21T10:14:54.460090740Z" level=info msg="RemoveContainer for \"48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75\""
Apr 21 10:14:54.463851 containerd[1505]: time="2026-04-21T10:14:54.463802283Z" level=info msg="RemoveContainer for \"48c582689425870680edd17a6871b6bcf8018aef763cf91226758ba8b05c9f75\" returns successfully"