Apr 13 20:12:19.965484 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 20:12:19.965501 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:12:19.965510 kernel: BIOS-provided physical RAM map: Apr 13 20:12:19.965515 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 13 20:12:19.965519 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable Apr 13 20:12:19.965523 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved Apr 13 20:12:19.965528 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable Apr 13 20:12:19.965533 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved Apr 13 20:12:19.965537 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20 Apr 13 20:12:19.965541 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved Apr 13 20:12:19.965546 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Apr 13 20:12:19.965553 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Apr 13 20:12:19.965557 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable Apr 13 20:12:19.965562 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved Apr 13 20:12:19.965567 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Apr 13 20:12:19.965571 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 13 20:12:19.965578 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Apr 13 20:12:19.965594 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable Apr 13 20:12:19.965598 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 13 20:12:19.965603 kernel: NX (Execute Disable) protection: active Apr 13 20:12:19.965607 kernel: APIC: Static calls initialized Apr 13 20:12:19.965612 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Apr 13 20:12:19.965617 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198 Apr 13 20:12:19.965622 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Apr 13 20:12:19.965626 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Apr 13 20:12:19.965631 kernel: SMBIOS 3.0.0 present. 
Apr 13 20:12:19.965635 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Apr 13 20:12:19.965640 kernel: Hypervisor detected: KVM Apr 13 20:12:19.965647 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 20:12:19.965652 kernel: kvm-clock: using sched offset of 12719903856 cycles Apr 13 20:12:19.965656 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 20:12:19.965661 kernel: tsc: Detected 2399.998 MHz processor Apr 13 20:12:19.965666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 20:12:19.965671 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 20:12:19.965676 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000 Apr 13 20:12:19.965681 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 13 20:12:19.965685 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 20:12:19.965693 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000 Apr 13 20:12:19.965697 kernel: Using GB pages for direct mapping Apr 13 20:12:19.965702 kernel: Secure boot disabled Apr 13 20:12:19.965710 kernel: ACPI: Early table checksum verification disabled Apr 13 20:12:19.965715 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Apr 13 20:12:19.965720 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 13 20:12:19.965725 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 20:12:19.965732 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 20:12:19.965737 kernel: ACPI: FACS 0x000000007FBDD000 000040 Apr 13 20:12:19.965753 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 20:12:19.965758 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 20:12:19.965763 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 20:12:19.965768 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 13 20:12:19.965773 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 13 20:12:19.965781 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3] Apr 13 20:12:19.965786 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442] Apr 13 20:12:19.965791 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Apr 13 20:12:19.965796 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f] Apr 13 20:12:19.965800 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037] Apr 13 20:12:19.965805 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b] Apr 13 20:12:19.965810 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027] Apr 13 20:12:19.965815 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037] Apr 13 20:12:19.965820 kernel: No NUMA configuration found Apr 13 20:12:19.965827 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff] Apr 13 20:12:19.965832 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff] Apr 13 20:12:19.965837 kernel: Zone ranges: Apr 13 20:12:19.965842 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 20:12:19.965847 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 13 20:12:19.965852 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff] Apr 13 
20:12:19.965857 kernel: Movable zone start for each node Apr 13 20:12:19.965862 kernel: Early memory node ranges Apr 13 20:12:19.965867 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 13 20:12:19.965872 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff] Apr 13 20:12:19.965879 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff] Apr 13 20:12:19.965884 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff] Apr 13 20:12:19.965889 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff] Apr 13 20:12:19.965894 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff] Apr 13 20:12:19.965899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 20:12:19.965904 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 13 20:12:19.965908 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Apr 13 20:12:19.965913 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 13 20:12:19.965918 kernel: On node 0, zone Normal: 132 pages in unavailable ranges Apr 13 20:12:19.965926 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Apr 13 20:12:19.965930 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 13 20:12:19.965935 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 20:12:19.965940 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 13 20:12:19.965945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 13 20:12:19.965950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 20:12:19.965955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 20:12:19.965960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 20:12:19.965965 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 20:12:19.965972 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 20:12:19.965977 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 13 20:12:19.965982 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 13 20:12:19.965987 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 13 20:12:19.965992 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Apr 13 20:12:19.965997 kernel: Booting paravirtualized kernel on KVM Apr 13 20:12:19.966002 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 20:12:19.966007 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 13 20:12:19.966012 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 13 20:12:19.966019 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 13 20:12:19.966024 kernel: pcpu-alloc: [0] 0 1 Apr 13 20:12:19.966029 kernel: kvm-guest: PV spinlocks disabled, no host support Apr 13 20:12:19.966034 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:12:19.966040 kernel: random: crng init done Apr 13 20:12:19.966045 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 20:12:19.966050 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 
13 20:12:19.966055 kernel: Fallback order for Node 0: 0 Apr 13 20:12:19.966062 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632 Apr 13 20:12:19.966067 kernel: Policy zone: Normal Apr 13 20:12:19.966072 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 20:12:19.966077 kernel: software IO TLB: area num 2. Apr 13 20:12:19.966082 kernel: Memory: 3819404K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 271560K reserved, 0K cma-reserved) Apr 13 20:12:19.966087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 20:12:19.966092 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 20:12:19.966097 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 20:12:19.966101 kernel: Dynamic Preempt: voluntary Apr 13 20:12:19.966109 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 20:12:19.966114 kernel: rcu: RCU event tracing is enabled. Apr 13 20:12:19.966119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 20:12:19.966125 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 20:12:19.966137 kernel: Rude variant of Tasks RCU enabled. Apr 13 20:12:19.966144 kernel: Tracing variant of Tasks RCU enabled. Apr 13 20:12:19.966149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 20:12:19.966154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 20:12:19.966159 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 13 20:12:19.966165 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 13 20:12:19.966170 kernel: Console: colour dummy device 80x25 Apr 13 20:12:19.966175 kernel: printk: console [tty0] enabled Apr 13 20:12:19.966182 kernel: printk: console [ttyS0] enabled Apr 13 20:12:19.966187 kernel: ACPI: Core revision 20230628 Apr 13 20:12:19.966193 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 13 20:12:19.966198 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 20:12:19.966203 kernel: x2apic enabled Apr 13 20:12:19.966210 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 20:12:19.966216 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 13 20:12:19.966221 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 13 20:12:19.966226 kernel: Calibrating delay loop (skipped) preset value.. 
4799.99 BogoMIPS (lpj=2399998) Apr 13 20:12:19.966231 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 13 20:12:19.966236 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 13 20:12:19.966241 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 13 20:12:19.966247 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 20:12:19.966252 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Apr 13 20:12:19.966259 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 13 20:12:19.966264 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 13 20:12:19.966270 kernel: active return thunk: srso_alias_return_thunk Apr 13 20:12:19.966275 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET Apr 13 20:12:19.966280 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Apr 13 20:12:19.966285 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 20:12:19.966290 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 20:12:19.966295 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 20:12:19.966300 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 20:12:19.966308 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 13 20:12:19.966313 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 13 20:12:19.966318 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 13 20:12:19.966324 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 13 20:12:19.966329 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 20:12:19.966334 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 13 20:12:19.966339 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 13 20:12:19.966344 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 13 20:12:19.966349 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Apr 13 20:12:19.966357 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Apr 13 20:12:19.966362 kernel: Freeing SMP alternatives memory: 32K Apr 13 20:12:19.966367 kernel: pid_max: default: 32768 minimum: 301 Apr 13 20:12:19.966373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 20:12:19.966378 kernel: landlock: Up and running. Apr 13 20:12:19.966383 kernel: SELinux: Initializing. Apr 13 20:12:19.966388 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 20:12:19.966393 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 20:12:19.966398 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0) Apr 13 20:12:19.966406 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:12:19.966411 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:12:19.966416 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:12:19.966421 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 13 20:12:19.966427 kernel: ... version: 0 Apr 13 20:12:19.966432 kernel: ... bit width: 48 Apr 13 20:12:19.966437 kernel: ... 
generic registers: 6 Apr 13 20:12:19.966442 kernel: ... value mask: 0000ffffffffffff Apr 13 20:12:19.966447 kernel: ... max period: 00007fffffffffff Apr 13 20:12:19.966455 kernel: ... fixed-purpose events: 0 Apr 13 20:12:19.966460 kernel: ... event mask: 000000000000003f Apr 13 20:12:19.966465 kernel: signal: max sigframe size: 3376 Apr 13 20:12:19.966470 kernel: rcu: Hierarchical SRCU implementation. Apr 13 20:12:19.966475 kernel: rcu: Max phase no-delay instances is 400. Apr 13 20:12:19.966480 kernel: smp: Bringing up secondary CPUs ... Apr 13 20:12:19.966485 kernel: smpboot: x86: Booting SMP configuration: Apr 13 20:12:19.966490 kernel: .... node #0, CPUs: #1 Apr 13 20:12:19.966496 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 20:12:19.966503 kernel: smpboot: Max logical packages: 1 Apr 13 20:12:19.966508 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS) Apr 13 20:12:19.966513 kernel: devtmpfs: initialized Apr 13 20:12:19.966518 kernel: x86/mm: Memory block size: 128MB Apr 13 20:12:19.966524 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Apr 13 20:12:19.966529 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 20:12:19.966534 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 20:12:19.966539 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 20:12:19.968780 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 20:12:19.968791 kernel: audit: initializing netlink subsys (disabled) Apr 13 20:12:19.968797 kernel: audit: type=2000 audit(1776111138.294:1): state=initialized audit_enabled=0 res=1 Apr 13 20:12:19.968802 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 20:12:19.968807 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 20:12:19.968813 kernel: cpuidle: using governor menu Apr 13 20:12:19.968818 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 20:12:19.968823 kernel: dca service started, version 1.12.1 Apr 13 20:12:19.968829 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 13 20:12:19.968834 kernel: PCI: Using configuration type 1 for base access Apr 13 20:12:19.968842 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 13 20:12:19.968847 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 20:12:19.968853 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 20:12:19.968858 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 20:12:19.968863 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 20:12:19.968870 kernel: ACPI: Added _OSI(Module Device) Apr 13 20:12:19.968878 kernel: ACPI: Added _OSI(Processor Device) Apr 13 20:12:19.968886 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 20:12:19.968894 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 20:12:19.968902 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 20:12:19.968908 kernel: ACPI: Interpreter enabled Apr 13 20:12:19.968916 kernel: ACPI: PM: (supports S0 S5) Apr 13 20:12:19.968923 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 20:12:19.968931 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 20:12:19.968940 kernel: PCI: Using E820 reservations for host bridge windows Apr 13 20:12:19.968947 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 13 20:12:19.968952 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 20:12:19.969132 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 20:12:19.969256 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 13 20:12:19.969356 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 13 20:12:19.969362 kernel: PCI host bridge to bus 0000:00 Apr 13 20:12:19.969481 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 13 20:12:19.969572 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 20:12:19.969678 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 20:12:19.969791 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Apr 13 20:12:19.969878 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Apr 13 20:12:19.969965 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window] Apr 13 20:12:19.970051 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 20:12:19.970163 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 13 20:12:19.970266 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Apr 13 20:12:19.970366 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref] Apr 13 20:12:19.970461 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref] Apr 13 20:12:19.970557 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff] Apr 13 20:12:19.970667 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 13 20:12:19.971196 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 13 20:12:19.971300 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 13 20:12:19.971427 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.971561 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff] Apr 13 20:12:19.971687 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.972249 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff] Apr 13 20:12:19.972360 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.972458 kernel: pci 0000:00:02.2: reg 0x10: [mem 
0x81387000-0x81387fff] Apr 13 20:12:19.972561 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.972675 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff] Apr 13 20:12:19.972793 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.972890 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff] Apr 13 20:12:19.972992 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.973088 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff] Apr 13 20:12:19.973192 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.973317 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff] Apr 13 20:12:19.973424 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.973521 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff] Apr 13 20:12:19.973634 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Apr 13 20:12:19.974147 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff] Apr 13 20:12:19.974256 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 13 20:12:19.974353 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 13 20:12:19.974459 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 13 20:12:19.974602 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f] Apr 13 20:12:19.977042 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff] Apr 13 20:12:19.977184 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 13 20:12:19.977284 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f] Apr 13 20:12:19.977392 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Apr 13 20:12:19.977497 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff] Apr 13 20:12:19.977608 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref] Apr 13 20:12:19.977709 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Apr 13 20:12:19.977818 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 13 20:12:19.977914 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Apr 13 20:12:19.978009 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 13 20:12:19.978115 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 13 20:12:19.978218 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit] Apr 13 20:12:19.978313 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 13 20:12:19.978408 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Apr 13 20:12:19.978516 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Apr 13 20:12:19.978624 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff] Apr 13 20:12:19.978723 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref] Apr 13 20:12:19.978900 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 13 20:12:19.978999 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Apr 13 20:12:19.979126 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 13 20:12:19.979269 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Apr 13 20:12:19.979371 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref] Apr 13 20:12:19.979469 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 13 20:12:19.979564 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 13 20:12:19.979683 
kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 13 20:12:19.979811 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff] Apr 13 20:12:19.979926 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref] Apr 13 20:12:19.980023 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 13 20:12:19.980138 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Apr 13 20:12:19.980242 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 13 20:12:19.980381 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Apr 13 20:12:19.980494 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff] Apr 13 20:12:19.980620 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref] Apr 13 20:12:19.980741 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 13 20:12:19.983955 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Apr 13 20:12:19.984055 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 13 20:12:19.984062 kernel: acpiphp: Slot [0] registered Apr 13 20:12:19.984171 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Apr 13 20:12:19.984290 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff] Apr 13 20:12:19.984432 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref] Apr 13 20:12:19.984537 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Apr 13 20:12:19.984647 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 13 20:12:19.984753 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Apr 13 20:12:19.984849 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 13 20:12:19.984856 kernel: acpiphp: Slot [0-2] registered Apr 13 20:12:19.984951 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 13 20:12:19.985046 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Apr 13 20:12:19.985141 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 13 20:12:19.985151 kernel: acpiphp: Slot [0-3] registered Apr 13 20:12:19.985246 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 13 20:12:19.985341 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Apr 13 20:12:19.985435 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 13 20:12:19.985442 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 20:12:19.985448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 20:12:19.985453 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 20:12:19.985459 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 20:12:19.985467 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 13 20:12:19.985472 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 13 20:12:19.985478 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 13 20:12:19.985483 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 13 20:12:19.985488 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 13 20:12:19.985493 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 13 20:12:19.985499 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 13 20:12:19.985504 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 13 20:12:19.985509 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 13 20:12:19.985517 kernel: ACPI: PCI: 
Interrupt link GSIF configured for IRQ 21 Apr 13 20:12:19.985522 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 13 20:12:19.985527 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 13 20:12:19.985533 kernel: iommu: Default domain type: Translated Apr 13 20:12:19.985538 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 20:12:19.985543 kernel: efivars: Registered efivars operations Apr 13 20:12:19.985548 kernel: PCI: Using ACPI for IRQ routing Apr 13 20:12:19.985554 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 20:12:19.985559 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff] Apr 13 20:12:19.985567 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Apr 13 20:12:19.985573 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff] Apr 13 20:12:19.985578 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff] Apr 13 20:12:19.985684 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 13 20:12:19.990092 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 13 20:12:19.990201 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 13 20:12:19.990208 kernel: vgaarb: loaded Apr 13 20:12:19.990214 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 13 20:12:19.990220 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 13 20:12:19.990230 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 20:12:19.990236 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 20:12:19.990241 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 20:12:19.990247 kernel: pnp: PnP ACPI init Apr 13 20:12:19.990354 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Apr 13 20:12:19.990363 kernel: pnp: PnP ACPI: found 5 devices Apr 13 20:12:19.990368 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 20:12:19.990374 kernel: NET: Registered PF_INET protocol family Apr 13 20:12:19.990395 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 20:12:19.990403 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 20:12:19.990409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 20:12:19.990414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 20:12:19.990420 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 20:12:19.990426 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 20:12:19.990431 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 20:12:19.990437 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 20:12:19.990442 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 20:12:19.990451 kernel: NET: Registered PF_XDP protocol family Apr 13 20:12:19.990555 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window Apr 13 20:12:19.990670 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window Apr 13 20:12:19.990792 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 13 20:12:19.990890 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 13 20:12:19.990985 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 
13 20:12:19.991080 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Apr 13 20:12:19.991179 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Apr 13 20:12:19.991278 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Apr 13 20:12:19.991378 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref] Apr 13 20:12:19.991485 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 13 20:12:19.991608 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff] Apr 13 20:12:19.991706 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 13 20:12:19.991845 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 13 20:12:19.991941 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff] Apr 13 20:12:19.992037 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 13 20:12:19.992131 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff] Apr 13 20:12:19.992224 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 13 20:12:19.992323 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 13 20:12:19.992417 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 13 20:12:19.992515 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 13 20:12:19.992620 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff] Apr 13 20:12:19.992714 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 13 20:12:19.992821 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 13 20:12:19.992916 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff] Apr 13 20:12:19.993010 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 13 20:12:19.993110 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref] Apr 13 20:12:19.993208 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 13 20:12:19.993302 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Apr 13 20:12:19.993396 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff] Apr 13 20:12:19.993490 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 13 20:12:19.993617 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 13 20:12:19.993713 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Apr 13 20:12:19.994845 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff] Apr 13 20:12:19.994950 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 13 20:12:19.995048 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 13 20:12:19.995142 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Apr 13 20:12:19.995242 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff] Apr 13 20:12:19.995338 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 13 20:12:19.995431 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 20:12:19.995522 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:12:19.995625 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:12:19.995712 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 13 20:12:19.995821 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 13 20:12:19.995908 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 13 20:12:19.996008 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] 
Apr 13 20:12:19.996101 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 13 20:12:19.996204 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 13 20:12:19.996317 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 13 20:12:19.996650 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 13 20:12:19.997285 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 13 20:12:19.997393 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 13 20:12:19.997487 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 13 20:12:19.997599 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 13 20:12:19.997697 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 13 20:12:19.997809 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 13 20:12:19.997903 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 13 20:12:19.997995 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 13 20:12:19.998093 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 13 20:12:19.998187 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 13 20:12:19.998279 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 13 20:12:19.998381 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Apr 13 20:12:19.998473 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 13 20:12:19.998566 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 13 20:12:19.998573 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 20:12:19.998579 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:12:19.998594 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:12:19.998600 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 13 20:12:19.998609 kernel: Initialise system trusted keyrings Apr 13 20:12:19.998614 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 20:12:19.998620 kernel: Key type asymmetric registered Apr 13 20:12:19.998625 kernel: Asymmetric key parser 'x509' registered Apr 13 20:12:19.998631 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:12:19.998636 kernel: io scheduler mq-deadline registered Apr 13 20:12:19.998641 kernel: io scheduler kyber registered Apr 13 20:12:19.998647 kernel: io scheduler bfq registered Apr 13 20:12:19.998764 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 13 20:12:19.998864 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 13 20:12:19.998963 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 13 20:12:19.999059 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 13 20:12:19.999155 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 13 20:12:19.999250 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 13 20:12:19.999346 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 13 20:12:19.999441 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 13 20:12:19.999536 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 13 20:12:19.999640 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 13 20:12:19.999738 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 13 20:12:19.999844 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 13 20:12:19.999939 kernel: 
pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 13 20:12:20.000034 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 13 20:12:20.000133 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 13 20:12:20.000229 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 13 20:12:20.000236 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 20:12:20.000330 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 13 20:12:20.000427 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 13 20:12:20.000434 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:12:20.000439 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 13 20:12:20.000445 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:12:20.000450 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:12:20.000456 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:12:20.000462 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:12:20.000467 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:12:20.000473 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:12:20.001085 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 13 20:12:20.001192 kernel: rtc_cmos 00:03: registered as rtc0 Apr 13 20:12:20.001287 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:12:19 UTC (1776111139) Apr 13 20:12:20.001382 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 13 20:12:20.001392 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 13 20:12:20.001401 kernel: efifb: probing for efifb Apr 13 20:12:20.001409 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 13 20:12:20.001424 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 13 20:12:20.001429 kernel: efifb: scrolling: redraw Apr 13 20:12:20.001435 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 13 20:12:20.001440 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 20:12:20.001446 kernel: fb0: EFI VGA frame buffer device Apr 13 20:12:20.001452 kernel: pstore: Using crash dump compression: deflate Apr 13 20:12:20.001457 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:12:20.001463 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:12:20.001468 kernel: Segment Routing with IPv6 Apr 13 20:12:20.001474 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:12:20.001482 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:12:20.001487 kernel: Key type dns_resolver registered Apr 13 20:12:20.001493 kernel: IPI shorthand broadcast: enabled Apr 13 20:12:20.001499 kernel: sched_clock: Marking stable (1423011428, 218858506)->(1696301323, -54431389) Apr 13 20:12:20.001504 kernel: registered taskstats version 1 Apr 13 20:12:20.001510 kernel: Loading compiled-in X.509 certificates Apr 13 20:12:20.001516 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:12:20.001522 kernel: Key type .fscrypt registered Apr 13 20:12:20.001527 kernel: Key type fscrypt-provisioning registered Apr 13 20:12:20.001537 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 20:12:20.001546 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:12:20.001556 kernel: ima: No architecture policies found Apr 13 20:12:20.001565 kernel: clk: Disabling unused clocks Apr 13 20:12:20.001573 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:12:20.001578 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:12:20.001594 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:12:20.001600 kernel: Run /init as init process Apr 13 20:12:20.001608 kernel: with arguments: Apr 13 20:12:20.001614 kernel: /init Apr 13 20:12:20.001619 kernel: with environment: Apr 13 20:12:20.001625 kernel: HOME=/ Apr 13 20:12:20.001630 kernel: TERM=linux Apr 13 20:12:20.001638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:12:20.001646 systemd[1]: Detected virtualization kvm. Apr 13 20:12:20.001652 systemd[1]: Detected architecture x86-64. Apr 13 20:12:20.001660 systemd[1]: Running in initrd. Apr 13 20:12:20.001666 systemd[1]: No hostname configured, using default hostname. Apr 13 20:12:20.001671 systemd[1]: Hostname set to . Apr 13 20:12:20.001677 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:12:20.001686 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:12:20.001691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:12:20.001697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:12:20.001704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 20:12:20.001712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:12:20.001718 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:12:20.001724 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:12:20.001731 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:12:20.001737 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:12:20.001839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:12:20.001849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:12:20.001855 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:12:20.001861 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:12:20.001867 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:12:20.001873 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:12:20.001878 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:12:20.001884 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:12:20.001890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:12:20.001896 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Apr 13 20:12:20.001904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:12:20.001910 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:12:20.001916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:12:20.001921 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:12:20.001927 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:12:20.001933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:12:20.001939 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:12:20.001945 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:12:20.001951 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:12:20.001963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:12:20.001981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:12:20.002015 systemd-journald[188]: Collecting audit messages is disabled. Apr 13 20:12:20.002031 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:12:20.002040 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:12:20.002046 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:12:20.002052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:12:20.002058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:12:20.002067 systemd-journald[188]: Journal started Apr 13 20:12:20.002079 systemd-journald[188]: Runtime Journal (/run/log/journal/7b728ee1de0942b7b55b325c14d39831) is 8.0M, max 76.3M, 68.3M free. Apr 13 20:12:20.004054 kernel: Bridge firewalling registered Apr 13 20:12:20.004074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:19.977146 systemd-modules-load[189]: Inserted module 'overlay' Apr 13 20:12:20.003465 systemd-modules-load[189]: Inserted module 'br_netfilter' Apr 13 20:12:20.010821 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:12:20.010793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:12:20.011316 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:12:20.016842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:12:20.018337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:12:20.021854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:12:20.024893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:12:20.034964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:12:20.039325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:12:20.040395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:12:20.046879 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:12:20.047432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 20:12:20.050874 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:12:20.065879 dracut-cmdline[228]: dracut-dracut-053 Apr 13 20:12:20.070766 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:12:20.075187 systemd-resolved[223]: Positive Trust Anchors: Apr 13 20:12:20.075198 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:12:20.075221 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:12:20.080167 systemd-resolved[223]: Defaulting to hostname 'linux'. Apr 13 20:12:20.081619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:12:20.082521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:12:20.139791 kernel: SCSI subsystem initialized Apr 13 20:12:20.147769 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:12:20.157776 kernel: iscsi: registered transport (tcp) Apr 13 20:12:20.175028 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:12:20.175098 kernel: QLogic iSCSI HBA Driver Apr 13 20:12:20.225566 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:12:20.237976 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:12:20.269648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:12:20.269701 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:12:20.272771 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:12:20.309778 kernel: raid6: avx512x4 gen() 45566 MB/s Apr 13 20:12:20.327809 kernel: raid6: avx512x2 gen() 46896 MB/s Apr 13 20:12:20.345823 kernel: raid6: avx512x1 gen() 43766 MB/s Apr 13 20:12:20.363796 kernel: raid6: avx2x4 gen() 47900 MB/s Apr 13 20:12:20.381794 kernel: raid6: avx2x2 gen() 49485 MB/s Apr 13 20:12:20.400868 kernel: raid6: avx2x1 gen() 40105 MB/s Apr 13 20:12:20.400937 kernel: raid6: using algorithm avx2x2 gen() 49485 MB/s Apr 13 20:12:20.420893 kernel: raid6: .... xor() 37092 MB/s, rmw enabled Apr 13 20:12:20.420964 kernel: raid6: using avx512x2 recovery algorithm Apr 13 20:12:20.437783 kernel: xor: automatically using best checksumming function avx Apr 13 20:12:20.541791 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:12:20.554452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:12:20.560911 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 13 20:12:20.571814 systemd-udevd[410]: Using default interface naming scheme 'v255'. Apr 13 20:12:20.575519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:12:20.581902 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:12:20.593781 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Apr 13 20:12:20.626683 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:12:20.631866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:12:20.701165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:12:20.708921 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:12:20.725265 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:12:20.726824 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:12:20.727535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:12:20.728634 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:12:20.733917 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:12:20.745731 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:12:20.804774 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:12:20.809211 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 20:12:20.812766 kernel: ACPI: bus type USB registered Apr 13 20:12:20.830795 kernel: usbcore: registered new interface driver usbfs Apr 13 20:12:20.833786 kernel: usbcore: registered new interface driver hub Apr 13 20:12:20.835760 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:12:20.835786 kernel: libata version 3.00 loaded. Apr 13 20:12:20.840770 kernel: usbcore: registered new device driver usb Apr 13 20:12:20.846360 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:12:20.846486 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:12:20.848131 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:12:20.848445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:12:20.848624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:20.849823 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:12:20.858989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:12:20.863074 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:12:20.863104 kernel: AES CTR mode by8 optimization enabled Apr 13 20:12:20.870020 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:12:20.870523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:20.881291 kernel: ahci 0000:00:1f.2: version 3.0 Apr 13 20:12:20.881478 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 13 20:12:20.881705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 20:12:20.890812 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 13 20:12:20.890999 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 13 20:12:20.893017 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:12:20.901036 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 13 20:12:20.901221 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 13 20:12:20.905830 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:12:20.906057 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 13 20:12:20.910770 kernel: scsi host1: ahci Apr 13 20:12:20.910842 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 13 20:12:20.916131 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 13 20:12:20.916317 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 13 20:12:20.916826 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 20:12:20.918788 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 13 20:12:20.921117 kernel: hub 1-0:1.0: USB hub found Apr 13 20:12:20.921281 kernel: scsi host2: ahci Apr 13 20:12:20.924242 kernel: hub 1-0:1.0: 4 ports detected Apr 13 20:12:20.931350 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:12:20.931516 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 13 20:12:20.935618 kernel: hub 2-0:1.0: USB hub found Apr 13 20:12:20.935890 kernel: hub 2-0:1.0: 4 ports detected Apr 13 20:12:20.938361 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:12:20.938375 kernel: GPT:17805311 != 160006143 Apr 13 20:12:20.938384 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:12:20.938392 kernel: GPT:17805311 != 160006143 Apr 13 20:12:20.938422 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:12:20.938430 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:12:20.936815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:20.950475 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 20:12:20.950691 kernel: scsi host3: ahci Apr 13 20:12:20.955779 kernel: scsi host4: ahci Apr 13 20:12:20.956001 kernel: scsi host5: ahci Apr 13 20:12:20.955638 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:12:20.979947 kernel: scsi host6: ahci Apr 13 20:12:20.980141 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 42 Apr 13 20:12:20.980150 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 42 Apr 13 20:12:20.980158 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 42 Apr 13 20:12:20.980166 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 42 Apr 13 20:12:20.980174 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 42 Apr 13 20:12:20.980181 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 42 Apr 13 20:12:21.003149 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (486) Apr 13 20:12:21.008761 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (462) Apr 13 20:12:21.007863 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Apr 13 20:12:21.011846 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:12:21.015969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 13 20:12:21.022471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:12:21.025881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 20:12:21.026513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:12:21.031873 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:12:21.036794 disk-uuid[585]: Primary Header is updated. Apr 13 20:12:21.036794 disk-uuid[585]: Secondary Entries is updated. Apr 13 20:12:21.036794 disk-uuid[585]: Secondary Header is updated. Apr 13 20:12:21.041768 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:12:21.047767 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:12:21.053765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:12:21.171912 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 13 20:12:21.271775 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 13 20:12:21.275793 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 13 20:12:21.281623 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 13 20:12:21.281643 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 13 20:12:21.288769 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 13 20:12:21.288794 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 13 20:12:21.288803 kernel: ata1.00: applying bridge limits Apr 13 20:12:21.291777 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 13 20:12:21.291843 kernel: ata1.00: configured for UDMA/100 Apr 13 20:12:21.295775 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 13 20:12:21.312268 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 20:12:21.317846 kernel: usbcore: registered new interface driver usbhid Apr 13 20:12:21.317887 kernel: usbhid: USB HID core driver Apr 13 20:12:21.324817 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 13 20:12:21.324858 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 13 20:12:21.333574 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 13 20:12:21.333901 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 20:12:21.346917 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 13 20:12:22.058792 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:12:22.058867 disk-uuid[586]: The operation has completed successfully. Apr 13 20:12:22.134758 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:12:22.134861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:12:22.146883 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:12:22.150047 sh[607]: Success Apr 13 20:12:22.161771 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:12:22.207431 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:12:22.214835 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
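The /usr partition is brought up read-only behind dm-verity, with the sha256 hashing handled by the sha256-ni implementation noted above. As a sketch of how the resulting mapping could be inspected from the running system (the mapping name "usr" is taken from the log; the commands are illustrative, not part of it):

    sudo veritysetup status usr   # data device, hash device, root hash and verification status
    sudo dmsetup table usr        # raw device-mapper verity table for the same mapping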
Apr 13 20:12:22.215807 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 20:12:22.233267 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:12:22.233317 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:12:22.233327 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:12:22.237784 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:12:22.237795 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:12:22.247772 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:12:22.250260 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:12:22.251150 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:12:22.264920 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:12:22.267852 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:12:22.279935 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:12:22.279962 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:12:22.283560 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:12:22.293427 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:12:22.293453 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:12:22.306964 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:12:22.306760 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:12:22.313951 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:12:22.318890 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 20:12:22.383505 ignition[711]: Ignition 2.19.0 Apr 13 20:12:22.383515 ignition[711]: Stage: fetch-offline Apr 13 20:12:22.383548 ignition[711]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:22.386199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:12:22.383557 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:22.383638 ignition[711]: parsed url from cmdline: "" Apr 13 20:12:22.383642 ignition[711]: no config URL provided Apr 13 20:12:22.383647 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:12:22.383655 ignition[711]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:12:22.383659 ignition[711]: failed to fetch config: resource requires networking Apr 13 20:12:22.384446 ignition[711]: Ignition finished successfully Apr 13 20:12:22.393823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:12:22.399860 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:12:22.417680 systemd-networkd[795]: lo: Link UP Apr 13 20:12:22.417690 systemd-networkd[795]: lo: Gained carrier Apr 13 20:12:22.420255 systemd-networkd[795]: Enumeration completed Apr 13 20:12:22.420355 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:12:22.420977 systemd[1]: Reached target network.target - Network. 
Apr 13 20:12:22.421567 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:22.421571 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:12:22.423276 systemd-networkd[795]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:22.423280 systemd-networkd[795]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:12:22.423912 systemd-networkd[795]: eth0: Link UP Apr 13 20:12:22.423916 systemd-networkd[795]: eth0: Gained carrier Apr 13 20:12:22.423922 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:22.427946 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:12:22.428871 systemd-networkd[795]: eth1: Link UP Apr 13 20:12:22.428875 systemd-networkd[795]: eth1: Gained carrier Apr 13 20:12:22.428886 systemd-networkd[795]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:22.439626 ignition[797]: Ignition 2.19.0 Apr 13 20:12:22.439640 ignition[797]: Stage: fetch Apr 13 20:12:22.439778 ignition[797]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:22.439788 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:22.439858 ignition[797]: parsed url from cmdline: "" Apr 13 20:12:22.439862 ignition[797]: no config URL provided Apr 13 20:12:22.439867 ignition[797]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:12:22.439875 ignition[797]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:12:22.439890 ignition[797]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 13 20:12:22.440057 ignition[797]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:12:22.454792 systemd-networkd[795]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 20:12:22.491814 systemd-networkd[795]: eth0: DHCPv4 address 204.168.241.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 20:12:22.640303 ignition[797]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 13 20:12:22.646923 ignition[797]: GET result: OK Apr 13 20:12:22.647042 ignition[797]: parsing config with SHA512: 3bbb1acce2a6b2c1a9d225681ac30af21568d20b10fd6c8baae5166887ea9657d624e3996ed7bd1bf4d5a0e612dde6b69618925dd152b757551d79b7e4fd2a63 Apr 13 20:12:22.653098 unknown[797]: fetched base config from "system" Apr 13 20:12:22.653118 unknown[797]: fetched base config from "system" Apr 13 20:12:22.653140 unknown[797]: fetched user config from "hetzner" Apr 13 20:12:22.655897 ignition[797]: fetch: fetch complete Apr 13 20:12:22.655968 ignition[797]: fetch: fetch passed Apr 13 20:12:22.657219 ignition[797]: Ignition finished successfully Apr 13 20:12:22.661432 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:12:22.670023 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
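Note the sequencing: the first GET to the Hetzner metadata service fails with "network is unreachable" because ignition-fetch starts before DHCP has finished; once eth0 and eth1 obtain their leases the retry succeeds. The same link-local endpoint can be queried from the booted server, for example:

    # Endpoints taken from the log; reachable only from the instance itself
    curl -s http://169.254.169.254/hetzner/v1/userdata            # the user data Ignition fetched
    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname   # the hostname used later by coreos-metadata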
Apr 13 20:12:22.705897 ignition[805]: Ignition 2.19.0 Apr 13 20:12:22.705914 ignition[805]: Stage: kargs Apr 13 20:12:22.706141 ignition[805]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:22.706159 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:22.709393 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:12:22.707102 ignition[805]: kargs: kargs passed Apr 13 20:12:22.707167 ignition[805]: Ignition finished successfully Apr 13 20:12:22.716973 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 20:12:22.733645 ignition[812]: Ignition 2.19.0 Apr 13 20:12:22.733664 ignition[812]: Stage: disks Apr 13 20:12:22.734519 ignition[812]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:22.734532 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:22.736136 ignition[812]: disks: disks passed Apr 13 20:12:22.736194 ignition[812]: Ignition finished successfully Apr 13 20:12:22.738170 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:12:22.739713 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:12:22.740378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:12:22.741128 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:12:22.742173 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:12:22.743203 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:12:22.752977 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:12:22.773850 systemd-fsck[820]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:12:22.777048 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:12:22.783922 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:12:22.862806 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:12:22.862967 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:12:22.863817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:12:22.869812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:12:22.871341 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:12:22.874873 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 20:12:22.875521 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:12:22.875542 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:12:22.885863 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (828) Apr 13 20:12:22.889761 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:12:22.889846 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:12:22.894946 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:12:22.894971 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:12:22.899881 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
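Everything in this stage is located by filesystem label or partition label (ROOT above, plus EFI-SYSTEM, OEM and USR-A found earlier) rather than by device name. A quick, illustrative way to see that mapping on the running host (disk name /dev/sda taken from the log):

    lsblk -o NAME,SIZE,FSTYPE,LABEL,PARTLABEL /dev/sda   # label / partlabel layout of the boot disk
    blkid -L ROOT                                         # resolve a single label to its device node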
Apr 13 20:12:22.905350 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:12:22.905375 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:12:22.908851 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:12:22.937767 coreos-metadata[830]: Apr 13 20:12:22.937 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 13 20:12:22.939090 coreos-metadata[830]: Apr 13 20:12:22.938 INFO Fetch successful Apr 13 20:12:22.939662 coreos-metadata[830]: Apr 13 20:12:22.939 INFO wrote hostname ci-4081-3-7-c-b0ece174b2 to /sysroot/etc/hostname Apr 13 20:12:22.941740 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 20:12:22.943628 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:12:22.947716 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:12:22.951506 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:12:22.955234 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:12:23.037822 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:12:23.042834 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:12:23.044865 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:12:23.052775 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:12:23.074834 ignition[949]: INFO : Ignition 2.19.0 Apr 13 20:12:23.074834 ignition[949]: INFO : Stage: mount Apr 13 20:12:23.074834 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:23.074834 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:23.077033 ignition[949]: INFO : mount: mount passed Apr 13 20:12:23.077033 ignition[949]: INFO : Ignition finished successfully Apr 13 20:12:23.077040 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:12:23.080853 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:12:23.083014 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:12:23.229312 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:12:23.233880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:12:23.266805 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (961) Apr 13 20:12:23.274303 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:12:23.274352 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:12:23.279837 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:12:23.295825 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:12:23.295905 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:12:23.300718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 20:12:23.349539 ignition[977]: INFO : Ignition 2.19.0 Apr 13 20:12:23.349539 ignition[977]: INFO : Stage: files Apr 13 20:12:23.349539 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:23.349539 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:23.349539 ignition[977]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:12:23.353861 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:12:23.353861 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:12:23.356194 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:12:23.357157 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:12:23.358500 unknown[977]: wrote ssh authorized keys file for user: core Apr 13 20:12:23.359717 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:12:23.361171 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:12:23.362654 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:12:23.566553 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 20:12:23.775957 systemd-networkd[795]: eth0: Gained IPv6LL Apr 13 20:12:23.840149 systemd-networkd[795]: eth1: Gained IPv6LL Apr 13 20:12:23.872874 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:12:23.872874 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:12:23.876646 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 13 20:12:24.140129 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 20:12:24.456248 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:12:24.456248 ignition[977]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:12:24.459652 ignition[977]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:12:24.459652 ignition[977]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:12:24.459652 ignition[977]: INFO : files: files passed Apr 13 20:12:24.459652 ignition[977]: INFO : Ignition finished successfully Apr 13 20:12:24.459873 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:12:24.467009 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:12:24.469050 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:12:24.472775 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:12:24.473927 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 13 20:12:24.484288 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:12:24.484934 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:12:24.485627 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:12:24.487042 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:12:24.488105 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:12:24.491891 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:12:24.511842 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:12:24.511945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:12:24.513287 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:12:24.514175 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 20:12:24.514725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:12:24.522955 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:12:24.533581 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:12:24.543096 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:12:24.552043 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:12:24.553011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:12:24.553915 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:12:24.554726 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:12:24.554847 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:12:24.556173 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:12:24.556845 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:12:24.557782 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 20:12:24.558558 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:12:24.559430 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:12:24.560277 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:12:24.561082 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:12:24.561780 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:12:24.562450 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:12:24.563166 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:12:24.563869 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 20:12:24.563977 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:12:24.565084 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:12:24.565965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:12:24.566706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Apr 13 20:12:24.567460 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:12:24.567867 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:12:24.567949 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:12:24.569005 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:12:24.569086 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:12:24.569766 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:12:24.569835 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:12:24.570445 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 13 20:12:24.570513 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 20:12:24.575856 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:12:24.576237 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:12:24.576339 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:12:24.579911 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:12:24.580269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:12:24.580374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:12:24.581105 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 20:12:24.581202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:12:24.589504 ignition[1031]: INFO : Ignition 2.19.0 Apr 13 20:12:24.589504 ignition[1031]: INFO : Stage: umount Apr 13 20:12:24.593470 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:12:24.593470 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:12:24.593470 ignition[1031]: INFO : umount: umount passed Apr 13 20:12:24.593470 ignition[1031]: INFO : Ignition finished successfully Apr 13 20:12:24.591210 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:12:24.591305 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:12:24.592003 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:12:24.592082 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:12:24.595822 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:12:24.595895 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:12:24.597010 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:12:24.597052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:12:24.598890 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:12:24.598929 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:12:24.599581 systemd[1]: Stopped target network.target - Network. Apr 13 20:12:24.600800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:12:24.600846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:12:24.601190 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:12:24.601496 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 13 20:12:24.602517 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:12:24.603180 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:12:24.603770 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 20:12:24.604155 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:12:24.604525 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:12:24.605216 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:12:24.605255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:12:24.605927 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:12:24.605966 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:12:24.606627 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:12:24.606662 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:12:24.609997 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:12:24.610420 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:12:24.611634 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:12:24.614798 systemd-networkd[795]: eth1: DHCPv6 lease lost Apr 13 20:12:24.619388 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:12:24.619496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:12:24.620794 systemd-networkd[795]: eth0: DHCPv6 lease lost Apr 13 20:12:24.621902 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:12:24.621947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:12:24.622641 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:12:24.622772 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:12:24.624277 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:12:24.624342 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:12:24.629824 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:12:24.630479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:12:24.630853 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:12:24.631655 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:12:24.632023 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:12:24.633442 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:12:24.633484 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 20:12:24.635237 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:12:24.644878 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:12:24.644992 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:12:24.646886 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:12:24.647358 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:12:24.648533 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:12:24.648587 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:12:24.650149 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Apr 13 20:12:24.650300 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:12:24.651431 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:12:24.651492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:12:24.652213 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:12:24.652248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:12:24.652860 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:12:24.652899 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:12:24.653897 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:12:24.653943 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:12:24.654881 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:12:24.654932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:12:24.661931 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:12:24.662263 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:12:24.662305 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:12:24.662677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:12:24.662710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:24.667524 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:12:24.667987 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:12:24.669047 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:12:24.670401 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:12:24.689242 systemd[1]: Switching root. Apr 13 20:12:24.718038 systemd-journald[188]: Journal stopped Apr 13 20:12:25.779847 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Apr 13 20:12:25.779929 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 20:12:25.779946 kernel: SELinux: policy capability open_perms=1 Apr 13 20:12:25.779960 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 20:12:25.779972 kernel: SELinux: policy capability always_check_network=0 Apr 13 20:12:25.779988 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 20:12:25.779997 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 20:12:25.780005 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 20:12:25.780017 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 20:12:25.780029 kernel: audit: type=1403 audit(1776111144.879:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 20:12:25.780039 systemd[1]: Successfully loaded SELinux policy in 56.307ms. Apr 13 20:12:25.780062 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.274ms. Apr 13 20:12:25.780086 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:12:25.780102 systemd[1]: Detected virtualization kvm. 
Apr 13 20:12:25.780117 systemd[1]: Detected architecture x86-64. Apr 13 20:12:25.780131 systemd[1]: Detected first boot. Apr 13 20:12:25.780326 systemd[1]: Hostname set to . Apr 13 20:12:25.780340 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:12:25.780351 zram_generator::config[1074]: No configuration found. Apr 13 20:12:25.780365 systemd[1]: Populated /etc with preset unit settings. Apr 13 20:12:25.780374 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 20:12:25.780383 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 13 20:12:25.780393 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 20:12:25.780402 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 20:12:25.780411 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 20:12:25.780420 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 20:12:25.780433 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 20:12:25.780444 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 20:12:25.780453 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 20:12:25.780462 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 20:12:25.780471 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 20:12:25.780480 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:12:25.780489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:12:25.780498 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 20:12:25.780506 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 20:12:25.780517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 20:12:25.780528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:12:25.780537 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 20:12:25.780546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:12:25.780555 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 13 20:12:25.780564 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 20:12:25.780577 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 20:12:25.780596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 20:12:25.780617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:12:25.780629 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:12:25.780641 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:12:25.780654 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:12:25.780666 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 20:12:25.780677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 20:12:25.780690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 13 20:12:25.780702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:12:25.780718 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:12:25.780731 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 20:12:25.780764 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 20:12:25.780773 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 20:12:25.780782 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 20:12:25.780791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:12:25.780800 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 20:12:25.780808 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 20:12:25.780817 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 20:12:25.780829 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 20:12:25.780838 systemd[1]: Reached target machines.target - Containers. Apr 13 20:12:25.780846 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 20:12:25.780855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:12:25.780865 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:12:25.780874 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 20:12:25.780883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:12:25.780892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:12:25.780903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:12:25.780912 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 20:12:25.780921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:12:25.780930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 20:12:25.780939 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 20:12:25.780948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 20:12:25.780957 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 20:12:25.780968 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 20:12:25.780983 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:12:25.780996 kernel: ACPI: bus type drm_connector registered Apr 13 20:12:25.781008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:12:25.781021 kernel: fuse: init (API version 7.39) Apr 13 20:12:25.781032 kernel: loop: module loaded Apr 13 20:12:25.781045 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 20:12:25.781059 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 20:12:25.781072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 13 20:12:25.781081 systemd[1]: verity-setup.service: Deactivated successfully. Apr 13 20:12:25.781095 systemd[1]: Stopped verity-setup.service. Apr 13 20:12:25.781108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:12:25.781140 systemd-journald[1161]: Collecting audit messages is disabled. Apr 13 20:12:25.781164 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 20:12:25.781176 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 20:12:25.781185 systemd-journald[1161]: Journal started Apr 13 20:12:25.781203 systemd-journald[1161]: Runtime Journal (/run/log/journal/7b728ee1de0942b7b55b325c14d39831) is 8.0M, max 76.3M, 68.3M free. Apr 13 20:12:25.453382 systemd[1]: Queued start job for default target multi-user.target. Apr 13 20:12:25.478180 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 13 20:12:25.478644 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 20:12:25.787324 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:12:25.786032 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 20:12:25.786490 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 20:12:25.786989 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 20:12:25.787547 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 20:12:25.788154 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 20:12:25.788860 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:12:25.789518 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 20:12:25.789720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 20:12:25.790408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:12:25.790595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:12:25.791269 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 20:12:25.791453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:12:25.792220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:12:25.792408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:12:25.793072 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 20:12:25.793259 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 20:12:25.794095 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:12:25.794276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:12:25.795118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:12:25.795755 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 20:12:25.796378 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 20:12:25.809364 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 20:12:25.816291 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 20:12:25.819811 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Apr 13 20:12:25.820177 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 20:12:25.820202 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:12:25.822325 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 20:12:25.830364 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 20:12:25.833945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 20:12:25.834838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:12:25.836372 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 20:12:25.845028 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 20:12:25.845849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:12:25.853509 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 20:12:25.853942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:12:25.860659 systemd-journald[1161]: Time spent on flushing to /var/log/journal/7b728ee1de0942b7b55b325c14d39831 is 17.806ms for 1173 entries. Apr 13 20:12:25.860659 systemd-journald[1161]: System Journal (/var/log/journal/7b728ee1de0942b7b55b325c14d39831) is 8.0M, max 584.8M, 576.8M free. Apr 13 20:12:25.892144 systemd-journald[1161]: Received client request to flush runtime journal. Apr 13 20:12:25.861640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:12:25.865874 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 20:12:25.868953 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 20:12:25.870874 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 20:12:25.873116 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 20:12:25.873737 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 20:12:25.893990 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 20:12:25.907863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 20:12:25.909113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 20:12:25.919291 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 20:12:25.942769 kernel: loop0: detected capacity change from 0 to 8 Apr 13 20:12:25.952788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:12:25.968989 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 20:12:25.974517 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 20:12:25.977598 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 20:12:25.997855 kernel: loop1: detected capacity change from 0 to 142488 Apr 13 20:12:26.013906 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
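Up to this point journald has been writing to the volatile journal in /run (sized in the "Runtime Journal" line further up); the flush request above moves it to the persistent journal under /var/log/journal/7b728ee1de0942b7b55b325c14d39831. Illustrative commands for checking the result on the running system:

    journalctl --disk-usage                           # combined size of runtime and persistent journals
    journalctl -b -u systemd-journal-flush.service    # the flush step shown above, for the current boot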
Apr 13 20:12:26.025910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:12:26.066478 kernel: loop2: detected capacity change from 0 to 140768 Apr 13 20:12:26.112727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:12:26.113963 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Apr 13 20:12:26.114928 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Apr 13 20:12:26.120907 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 20:12:26.123838 kernel: loop3: detected capacity change from 0 to 228704 Apr 13 20:12:26.130579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:12:26.136474 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 20:12:26.167882 kernel: loop4: detected capacity change from 0 to 8 Apr 13 20:12:26.175773 kernel: loop5: detected capacity change from 0 to 142488 Apr 13 20:12:26.195776 kernel: loop6: detected capacity change from 0 to 140768 Apr 13 20:12:26.217170 kernel: loop7: detected capacity change from 0 to 228704 Apr 13 20:12:26.236962 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 13 20:12:26.237548 (sd-merge)[1219]: Merged extensions into '/usr'. Apr 13 20:12:26.244504 systemd[1]: Reloading requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 20:12:26.244622 systemd[1]: Reloading... Apr 13 20:12:26.314974 zram_generator::config[1242]: No configuration found. Apr 13 20:12:26.339738 ldconfig[1189]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 20:12:26.436251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:12:26.472967 systemd[1]: Reloading finished in 227 ms. Apr 13 20:12:26.506175 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 20:12:26.507022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 20:12:26.507730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 20:12:26.516905 systemd[1]: Starting ensure-sysext.service... Apr 13 20:12:26.519884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:12:26.523508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:12:26.526831 systemd[1]: Reloading requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)... Apr 13 20:12:26.526842 systemd[1]: Reloading... Apr 13 20:12:26.553865 systemd-udevd[1291]: Using default interface naming scheme 'v255'. Apr 13 20:12:26.566510 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 20:12:26.568670 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 20:12:26.569689 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 20:12:26.572546 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. 
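The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions and downloaded to /opt/extensions earlier. As a sketch, the merge can be listed or redone from the booted system with the standard systemd-sysext verbs:

    systemd-sysext status          # which extension images are currently merged, and where
    sudo systemd-sysext refresh    # unmerge and re-merge after adding or removing extension images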
Apr 13 20:12:26.572881 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Apr 13 20:12:26.579764 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:12:26.579778 systemd-tmpfiles[1290]: Skipping /boot Apr 13 20:12:26.599303 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:12:26.599420 systemd-tmpfiles[1290]: Skipping /boot Apr 13 20:12:26.615770 zram_generator::config[1319]: No configuration found. Apr 13 20:12:26.767776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 13 20:12:26.781782 kernel: ACPI: button: Power Button [PWRF] Apr 13 20:12:26.790118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:12:26.799777 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1323) Apr 13 20:12:26.846333 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 13 20:12:26.847056 systemd[1]: Reloading finished in 319 ms. Apr 13 20:12:26.866695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:12:26.873768 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 20:12:26.869328 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:12:26.899521 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 13 20:12:26.913114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:12:26.920807 kernel: EDAC MC: Ver: 3.0.0 Apr 13 20:12:26.921113 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:12:26.924957 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 20:12:26.925543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:12:26.935980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:12:26.940019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:12:26.955835 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 13 20:12:26.955909 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 13 20:12:26.950972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:12:26.951956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:12:26.956984 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 20:12:26.957938 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 13 20:12:26.958597 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 13 20:12:26.958813 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 13 20:12:26.975079 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:12:26.984989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 13 20:12:26.999153 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 20:12:27.000087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:12:27.001197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:12:27.002342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:12:27.008793 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 13 20:12:27.010256 kernel: Console: switching to colour dummy device 80x25 Apr 13 20:12:27.014798 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 13 20:12:27.018786 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 13 20:12:27.018838 kernel: [drm] features: -context_init Apr 13 20:12:27.021623 kernel: [drm] number of scanouts: 1 Apr 13 20:12:27.021677 kernel: [drm] number of cap sets: 0 Apr 13 20:12:27.026793 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 13 20:12:27.026264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:12:27.026460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:12:27.038006 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 13 20:12:27.038082 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 20:12:27.047079 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 13 20:12:27.047367 systemd[1]: Finished ensure-sysext.service. Apr 13 20:12:27.050006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 20:12:27.055239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:12:27.057649 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:12:27.058389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:12:27.064436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:12:27.064660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:12:27.071667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:12:27.074866 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:12:27.075049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:12:27.083981 augenrules[1434]: No rules Apr 13 20:12:27.085621 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 20:12:27.086457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:12:27.089864 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 20:12:27.098977 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 20:12:27.101005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:12:27.101875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 13 20:12:27.104832 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:12:27.105235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 20:12:27.105673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:12:27.105812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:12:27.106194 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 20:12:27.106313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:12:27.106654 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 20:12:27.120591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:12:27.132954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 20:12:27.137004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:12:27.137179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:27.144912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:12:27.151579 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 20:12:27.153894 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 20:12:27.158732 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 20:12:27.160777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 20:12:27.162777 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 20:12:27.162908 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 20:12:27.177763 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:12:27.201132 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 20:12:27.203065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:12:27.209883 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 20:12:27.218874 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:12:27.251574 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 20:12:27.265508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:12:27.283467 systemd-networkd[1414]: lo: Link UP Apr 13 20:12:27.283718 systemd-resolved[1415]: Positive Trust Anchors: Apr 13 20:12:27.283730 systemd-resolved[1415]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:12:27.283768 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:12:27.284185 systemd-networkd[1414]: lo: Gained carrier Apr 13 20:12:27.286623 systemd-networkd[1414]: Enumeration completed Apr 13 20:12:27.286989 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:12:27.289294 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:27.289305 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:12:27.290105 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:27.290115 systemd-networkd[1414]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:12:27.290633 systemd-networkd[1414]: eth0: Link UP Apr 13 20:12:27.290643 systemd-networkd[1414]: eth0: Gained carrier Apr 13 20:12:27.290653 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:27.291915 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 13 20:12:27.293622 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 20:12:27.295546 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 20:12:27.295717 systemd-networkd[1414]: eth1: Link UP Apr 13 20:12:27.295721 systemd-networkd[1414]: eth1: Gained carrier Apr 13 20:12:27.295732 systemd-networkd[1414]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:12:27.298340 systemd-resolved[1415]: Using system hostname 'ci-4081-3-7-c-b0ece174b2'. Apr 13 20:12:27.301167 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:12:27.301782 systemd[1]: Reached target network.target - Network. Apr 13 20:12:27.302164 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:12:27.302558 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:12:27.304680 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 20:12:27.305183 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 20:12:27.306075 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 20:12:27.308072 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 20:12:27.308462 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 13 20:12:27.310224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 20:12:27.310256 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:12:27.310635 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:12:27.312798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 20:12:27.316959 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 20:12:27.322966 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 20:12:27.324758 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 20:12:27.327367 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:12:27.327836 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:12:27.328279 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:12:27.328307 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:12:27.329783 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 20:12:27.334895 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 20:12:27.336885 systemd-networkd[1414]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 20:12:27.339082 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Apr 13 20:12:27.344933 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 20:12:27.348828 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 20:12:27.354895 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 20:12:27.355286 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 20:12:27.356872 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 20:12:27.360821 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 20:12:27.374882 dbus-daemon[1479]: [system] SELinux support is enabled Apr 13 20:12:27.376122 coreos-metadata[1477]: Apr 13 20:12:27.367 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 13 20:12:27.376122 coreos-metadata[1477]: Apr 13 20:12:27.367 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata) Apr 13 20:12:27.366880 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 13 20:12:27.369631 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 20:12:27.373859 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 20:12:27.380932 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 20:12:27.381911 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 20:12:27.382320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 20:12:27.387878 systemd-networkd[1414]: eth0: DHCPv4 address 204.168.241.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 20:12:27.391317 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. 
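The coreos-metadata entries above show the Hetzner metadata fetch failing on its first attempt (the network is still being configured at that point) and succeeding on a later retry further down in the log. A minimal Python sketch of that retry pattern, assuming nothing about the real agent's implementation: the endpoint URL and the "Attempt #N" wording are taken from the log, while the attempt count, timeout, and delay are illustrative assumptions.

import time
import urllib.error
import urllib.request

# Endpoint taken from the coreos-metadata log lines above.
METADATA_URL = "http://169.254.169.254/hetzner/v1/metadata"

def fetch_metadata(url: str = METADATA_URL, attempts: int = 5, delay: float = 1.0) -> str:
    """Fetch instance metadata, retrying while the link is not yet configured.

    attempts/delay/timeout are assumptions for illustration, not values from the log.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        print(f"Fetching {url}: Attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                return resp.read().decode()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc
            time.sleep(delay)
    raise RuntimeError(f"failed to fetch {url}") from last_error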
Apr 13 20:12:27.391937 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 20:12:27.393829 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 20:12:27.397733 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 20:12:27.413257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 20:12:27.413988 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 20:12:27.417270 jq[1491]: true Apr 13 20:12:27.426553 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 20:12:27.426582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 20:12:27.428092 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 20:12:27.428107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 20:12:27.443069 jq[1480]: false Apr 13 20:12:27.441647 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 20:12:27.442806 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 20:12:27.451101 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 20:12:27.460798 jq[1498]: true Apr 13 20:12:27.462607 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 20:12:27.462819 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 20:12:27.470162 update_engine[1490]: I20260413 20:12:27.465873 1490 main.cc:92] Flatcar Update Engine starting Apr 13 20:12:27.472937 tar[1496]: linux-amd64/LICENSE Apr 13 20:12:27.472937 tar[1496]: linux-amd64/helm Apr 13 20:12:27.472524 systemd[1]: Started update-engine.service - Update Engine. Apr 13 20:12:27.475728 update_engine[1490]: I20260413 20:12:27.475679 1490 update_check_scheduler.cc:74] Next update check in 8m27s Apr 13 20:12:27.478686 extend-filesystems[1483]: Found loop4 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found loop5 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found loop6 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found loop7 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda1 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda2 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda3 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found usr Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda4 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda6 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda7 Apr 13 20:12:27.478686 extend-filesystems[1483]: Found sda9 Apr 13 20:12:27.478686 extend-filesystems[1483]: Checking size of /dev/sda9 Apr 13 20:12:27.484908 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 13 20:12:27.529944 extend-filesystems[1483]: Resized partition /dev/sda9 Apr 13 20:12:27.537475 extend-filesystems[1532]: resize2fs 1.47.1 (20-May-2024) Apr 13 20:12:27.543767 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Apr 13 20:12:27.556795 systemd-logind[1489]: New seat seat0. Apr 13 20:12:27.563140 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button) Apr 13 20:12:27.563194 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 13 20:12:27.563377 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 20:12:27.612932 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1324) Apr 13 20:12:27.640992 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 20:12:27.666417 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:12:27.668273 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 20:12:27.682336 systemd[1]: Starting sshkeys.service... Apr 13 20:12:27.721259 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 20:12:27.730581 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 20:12:27.756781 containerd[1502]: time="2026-04-13T20:12:27.755217447Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 20:12:27.773374 coreos-metadata[1550]: Apr 13 20:12:27.773 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 13 20:12:27.777706 coreos-metadata[1550]: Apr 13 20:12:27.777 INFO Fetch successful Apr 13 20:12:27.782336 unknown[1550]: wrote ssh authorized keys file for user: core Apr 13 20:12:27.792350 containerd[1502]: time="2026-04-13T20:12:27.792293478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.797514753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.798080743Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.798093503Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.798264803Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.798275053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.798321843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798380 containerd[1502]: time="2026-04-13T20:12:27.798329543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798674 containerd[1502]: time="2026-04-13T20:12:27.798659494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798713 containerd[1502]: time="2026-04-13T20:12:27.798705014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.798845 containerd[1502]: time="2026-04-13T20:12:27.798833034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:12:27.799098 containerd[1502]: time="2026-04-13T20:12:27.798882764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.799191 containerd[1502]: time="2026-04-13T20:12:27.799071494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.800005 containerd[1502]: time="2026-04-13T20:12:27.799795115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:12:27.800132 containerd[1502]: time="2026-04-13T20:12:27.800119165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:12:27.800728 containerd[1502]: time="2026-04-13T20:12:27.800262495Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 20:12:27.800728 containerd[1502]: time="2026-04-13T20:12:27.800347325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 20:12:27.800728 containerd[1502]: time="2026-04-13T20:12:27.800384655Z" level=info msg="metadata content store policy set" policy=shared Apr 13 20:12:27.815711 containerd[1502]: time="2026-04-13T20:12:27.813178146Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 20:12:27.815711 containerd[1502]: time="2026-04-13T20:12:27.813409596Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 20:12:27.815711 containerd[1502]: time="2026-04-13T20:12:27.813424106Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 20:12:27.815711 containerd[1502]: time="2026-04-13T20:12:27.813640756Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 20:12:27.815711 containerd[1502]: time="2026-04-13T20:12:27.813657586Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 20:12:27.817216 containerd[1502]: time="2026-04-13T20:12:27.816497958Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 13 20:12:27.817216 containerd[1502]: time="2026-04-13T20:12:27.816680159Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 20:12:27.817641 containerd[1502]: time="2026-04-13T20:12:27.816780669Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:12:27.817692 containerd[1502]: time="2026-04-13T20:12:27.817681089Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 20:12:27.817764 containerd[1502]: time="2026-04-13T20:12:27.817740129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.817831950Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.817846430Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.817856010Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.817867270Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.817877860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.817887590Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818097080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818107600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818124110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818134800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818154750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818165050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818283810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.820857 containerd[1502]: time="2026-04-13T20:12:27.818293800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818302360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818312090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818320930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818331440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818339600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818354370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818362980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818373860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818390850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818398610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818405830Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818580920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818596430Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:12:27.821056 containerd[1502]: time="2026-04-13T20:12:27.818605660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 20:12:27.827573 containerd[1502]: time="2026-04-13T20:12:27.818622930Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:12:27.827573 containerd[1502]: time="2026-04-13T20:12:27.818703950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.827573 containerd[1502]: time="2026-04-13T20:12:27.818714110Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:12:27.827573 containerd[1502]: time="2026-04-13T20:12:27.818721890Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:12:27.827573 containerd[1502]: time="2026-04-13T20:12:27.818730100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:12:27.826492 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.818998261Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.819039911Z" level=info msg="Connect containerd service" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.819088431Z" level=info msg="using legacy CRI server" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.819094541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.819186691Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.824726385Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:12:27.827873 containerd[1502]: 
time="2026-04-13T20:12:27.824986225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825025526Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825058056Z" level=info msg="Start subscribing containerd event" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825083136Z" level=info msg="Start recovering state" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825130046Z" level=info msg="Start event monitor" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825149756Z" level=info msg="Start snapshots syncer" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825156326Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825161966Z" level=info msg="Start streaming server" Apr 13 20:12:27.827873 containerd[1502]: time="2026-04-13T20:12:27.825203046Z" level=info msg="containerd successfully booted in 0.072123s" Apr 13 20:12:27.837647 update-ssh-keys[1555]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:12:27.838671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:12:27.843719 systemd[1]: Finished sshkeys.service. Apr 13 20:12:27.858057 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Apr 13 20:12:27.876581 extend-filesystems[1532]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 20:12:27.876581 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 13 20:12:27.876581 extend-filesystems[1532]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Apr 13 20:12:27.883200 extend-filesystems[1483]: Resized filesystem in /dev/sda9 Apr 13 20:12:27.883200 extend-filesystems[1483]: Found sr0 Apr 13 20:12:27.878900 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 20:12:27.886042 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:12:27.879095 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 20:12:27.902690 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:12:27.914195 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 20:12:27.924000 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 20:12:27.924256 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 20:12:27.933862 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 20:12:27.943766 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 20:12:27.955227 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:12:27.964035 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 20:12:27.964455 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:12:28.151002 tar[1496]: linux-amd64/README.md Apr 13 20:12:28.163839 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 13 20:12:28.367647 coreos-metadata[1477]: Apr 13 20:12:28.367 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Apr 13 20:12:28.368459 coreos-metadata[1477]: Apr 13 20:12:28.368 INFO Fetch successful Apr 13 20:12:28.369023 coreos-metadata[1477]: Apr 13 20:12:28.368 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 13 20:12:28.369536 coreos-metadata[1477]: Apr 13 20:12:28.369 INFO Fetch successful Apr 13 20:12:28.447955 systemd-networkd[1414]: eth0: Gained IPv6LL Apr 13 20:12:28.449908 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Apr 13 20:12:28.453208 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 20:12:28.457554 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 20:12:28.466008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:28.482238 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 20:12:28.485138 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 20:12:28.491528 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 20:12:28.522740 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 20:12:28.768171 systemd-networkd[1414]: eth1: Gained IPv6LL Apr 13 20:12:28.770683 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Apr 13 20:12:29.389871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:29.390383 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:12:29.390684 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 20:12:29.393107 systemd[1]: Startup finished in 1.567s (kernel) + 5.112s (initrd) + 4.568s (userspace) = 11.248s. Apr 13 20:12:29.831392 kubelet[1608]: E0413 20:12:29.831244 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:12:29.834285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:12:29.834486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:12:30.540882 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 20:12:30.548159 systemd[1]: Started sshd@0-204.168.241.7:22-20.229.252.112:45950.service - OpenSSH per-connection server daemon (20.229.252.112:45950). Apr 13 20:12:30.783800 sshd[1620]: Accepted publickey for core from 20.229.252.112 port 45950 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:30.786618 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:30.801496 systemd-logind[1489]: New session 1 of user core. Apr 13 20:12:30.804601 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:12:30.817159 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:12:30.842818 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
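The kubelet failure above (and the scheduled restart that appears later in the log) reflects the usual first-boot state of an unbootstrapped node: /var/lib/kubelet/config.yaml is typically written only when the node is joined to a cluster (for example by kubeadm), so until then the unit exits with the "no such file or directory" error shown. A trivial Python sketch of the same pre-flight condition, using only the path and message taken from the kubelet error above; the check itself is illustrative, not the kubelet's actual code.

from pathlib import Path

# Path taken from the kubelet error in the log above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present() -> bool:
    """Mirror the condition that fails in the log: the service cannot start without this file."""
    if not KUBELET_CONFIG.is_file():
        print(f"open {KUBELET_CONFIG}: no such file or directory")
        return False
    return True

if __name__ == "__main__":
    kubelet_config_present()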
Apr 13 20:12:30.852806 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 20:12:30.867174 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:12:30.973721 systemd[1624]: Queued start job for default target default.target. Apr 13 20:12:30.981791 systemd[1624]: Created slice app.slice - User Application Slice. Apr 13 20:12:30.981811 systemd[1624]: Reached target paths.target - Paths. Apr 13 20:12:30.981823 systemd[1624]: Reached target timers.target - Timers. Apr 13 20:12:30.983275 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:12:31.004655 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 20:12:31.004773 systemd[1624]: Reached target sockets.target - Sockets. Apr 13 20:12:31.004785 systemd[1624]: Reached target basic.target - Basic System. Apr 13 20:12:31.004820 systemd[1624]: Reached target default.target - Main User Target. Apr 13 20:12:31.004851 systemd[1624]: Startup finished in 126ms. Apr 13 20:12:31.005122 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:12:31.013888 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 20:12:31.198078 systemd[1]: Started sshd@1-204.168.241.7:22-20.229.252.112:45964.service - OpenSSH per-connection server daemon (20.229.252.112:45964). Apr 13 20:12:31.397079 sshd[1635]: Accepted publickey for core from 20.229.252.112 port 45964 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:31.399845 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:31.405690 systemd-logind[1489]: New session 2 of user core. Apr 13 20:12:31.413021 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 20:12:31.571307 sshd[1635]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:31.578004 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Apr 13 20:12:31.579673 systemd[1]: sshd@1-204.168.241.7:22-20.229.252.112:45964.service: Deactivated successfully. Apr 13 20:12:31.583235 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:12:31.584671 systemd-logind[1489]: Removed session 2. Apr 13 20:12:31.617101 systemd[1]: Started sshd@2-204.168.241.7:22-20.229.252.112:45968.service - OpenSSH per-connection server daemon (20.229.252.112:45968). Apr 13 20:12:31.841262 sshd[1642]: Accepted publickey for core from 20.229.252.112 port 45968 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:31.844959 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:31.853449 systemd-logind[1489]: New session 3 of user core. Apr 13 20:12:31.864056 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 20:12:32.008562 sshd[1642]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:32.015249 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit. Apr 13 20:12:32.015847 systemd[1]: sshd@2-204.168.241.7:22-20.229.252.112:45968.service: Deactivated successfully. Apr 13 20:12:32.019374 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 20:12:32.021425 systemd-logind[1489]: Removed session 3. Apr 13 20:12:32.053738 systemd[1]: Started sshd@3-204.168.241.7:22-20.229.252.112:45982.service - OpenSSH per-connection server daemon (20.229.252.112:45982). 
Apr 13 20:12:32.282389 sshd[1649]: Accepted publickey for core from 20.229.252.112 port 45982 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:32.283553 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:32.292711 systemd-logind[1489]: New session 4 of user core. Apr 13 20:12:32.298042 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 20:12:32.453818 sshd[1649]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:32.459380 systemd[1]: sshd@3-204.168.241.7:22-20.229.252.112:45982.service: Deactivated successfully. Apr 13 20:12:32.462313 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 20:12:32.466287 systemd-logind[1489]: Session 4 logged out. Waiting for processes to exit. Apr 13 20:12:32.468157 systemd-logind[1489]: Removed session 4. Apr 13 20:12:32.523244 systemd[1]: Started sshd@4-204.168.241.7:22-20.229.252.112:45984.service - OpenSSH per-connection server daemon (20.229.252.112:45984). Apr 13 20:12:32.732165 sshd[1656]: Accepted publickey for core from 20.229.252.112 port 45984 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:32.734913 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:32.742712 systemd-logind[1489]: New session 5 of user core. Apr 13 20:12:32.750970 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 20:12:32.886820 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:12:32.887497 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:12:32.908742 sudo[1659]: pam_unix(sudo:session): session closed for user root Apr 13 20:12:32.941314 sshd[1656]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:32.946997 systemd[1]: sshd@4-204.168.241.7:22-20.229.252.112:45984.service: Deactivated successfully. Apr 13 20:12:32.950457 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 20:12:32.952783 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit. Apr 13 20:12:32.954972 systemd-logind[1489]: Removed session 5. Apr 13 20:12:32.994435 systemd[1]: Started sshd@5-204.168.241.7:22-20.229.252.112:45996.service - OpenSSH per-connection server daemon (20.229.252.112:45996). Apr 13 20:12:33.219974 sshd[1664]: Accepted publickey for core from 20.229.252.112 port 45996 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:33.222863 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:33.230875 systemd-logind[1489]: New session 6 of user core. Apr 13 20:12:33.235962 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 13 20:12:33.363242 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 20:12:33.364026 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:12:33.370837 sudo[1668]: pam_unix(sudo:session): session closed for user root Apr 13 20:12:33.382670 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 20:12:33.383433 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:12:33.404144 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Apr 13 20:12:33.419735 auditctl[1671]: No rules Apr 13 20:12:33.420706 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 20:12:33.421157 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 20:12:33.429849 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:12:33.490008 augenrules[1689]: No rules Apr 13 20:12:33.491247 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:12:33.494985 sudo[1667]: pam_unix(sudo:session): session closed for user root Apr 13 20:12:33.527944 sshd[1664]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:33.532979 systemd[1]: sshd@5-204.168.241.7:22-20.229.252.112:45996.service: Deactivated successfully. Apr 13 20:12:33.536773 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 20:12:33.539596 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Apr 13 20:12:33.542179 systemd-logind[1489]: Removed session 6. Apr 13 20:12:33.576110 systemd[1]: Started sshd@6-204.168.241.7:22-20.229.252.112:46008.service - OpenSSH per-connection server daemon (20.229.252.112:46008). Apr 13 20:12:33.782361 sshd[1697]: Accepted publickey for core from 20.229.252.112 port 46008 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:12:33.785882 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:33.793979 systemd-logind[1489]: New session 7 of user core. Apr 13 20:12:33.810007 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 20:12:33.933179 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 20:12:33.933957 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:12:34.245933 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 20:12:34.263567 (dockerd)[1716]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 20:12:34.505466 dockerd[1716]: time="2026-04-13T20:12:34.505347861Z" level=info msg="Starting up" Apr 13 20:12:34.590569 dockerd[1716]: time="2026-04-13T20:12:34.590527232Z" level=info msg="Loading containers: start." Apr 13 20:12:34.696806 kernel: Initializing XFRM netlink socket Apr 13 20:12:34.718611 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Apr 13 20:12:34.767944 systemd-networkd[1414]: docker0: Link UP Apr 13 20:12:34.768340 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Apr 13 20:12:34.788160 dockerd[1716]: time="2026-04-13T20:12:34.788115476Z" level=info msg="Loading containers: done." 
Apr 13 20:12:34.807761 dockerd[1716]: time="2026-04-13T20:12:34.807705633Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 20:12:34.807926 dockerd[1716]: time="2026-04-13T20:12:34.807814673Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 20:12:34.807926 dockerd[1716]: time="2026-04-13T20:12:34.807909583Z" level=info msg="Daemon has completed initialization" Apr 13 20:12:34.837513 dockerd[1716]: time="2026-04-13T20:12:34.837451537Z" level=info msg="API listen on /run/docker.sock" Apr 13 20:12:34.838047 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 20:12:35.328133 containerd[1502]: time="2026-04-13T20:12:35.328052026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 20:12:35.971117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884162288.mount: Deactivated successfully. Apr 13 20:12:36.964858 containerd[1502]: time="2026-04-13T20:12:36.964805590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:36.966028 containerd[1502]: time="2026-04-13T20:12:36.965892990Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29989519" Apr 13 20:12:36.967121 containerd[1502]: time="2026-04-13T20:12:36.966889711Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:36.968989 containerd[1502]: time="2026-04-13T20:12:36.968967313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:36.969686 containerd[1502]: time="2026-04-13T20:12:36.969665794Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 1.641560358s" Apr 13 20:12:36.969765 containerd[1502]: time="2026-04-13T20:12:36.969739444Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 20:12:36.970233 containerd[1502]: time="2026-04-13T20:12:36.970208164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 20:12:38.122615 containerd[1502]: time="2026-04-13T20:12:38.122535344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:38.124149 containerd[1502]: time="2026-04-13T20:12:38.123903665Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021931" Apr 13 20:12:38.125445 containerd[1502]: time="2026-04-13T20:12:38.125240196Z" level=info msg="ImageCreate event 
name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:38.127952 containerd[1502]: time="2026-04-13T20:12:38.127911849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:38.129140 containerd[1502]: time="2026-04-13T20:12:38.128833479Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 1.158601775s" Apr 13 20:12:38.129140 containerd[1502]: time="2026-04-13T20:12:38.128860369Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 20:12:38.129549 containerd[1502]: time="2026-04-13T20:12:38.129527890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 20:12:39.197448 containerd[1502]: time="2026-04-13T20:12:39.197391799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:39.198406 containerd[1502]: time="2026-04-13T20:12:39.198225610Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162775" Apr 13 20:12:39.199224 containerd[1502]: time="2026-04-13T20:12:39.199193741Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:39.202177 containerd[1502]: time="2026-04-13T20:12:39.201189383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:39.202177 containerd[1502]: time="2026-04-13T20:12:39.201915843Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.072366213s" Apr 13 20:12:39.202177 containerd[1502]: time="2026-04-13T20:12:39.201936353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 13 20:12:39.202419 containerd[1502]: time="2026-04-13T20:12:39.202406834Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 20:12:39.978546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 20:12:39.984262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:40.127865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:12:40.131891 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:12:40.163777 kubelet[1932]: E0413 20:12:40.163427 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:12:40.169229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:12:40.169400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:12:40.311583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44702293.mount: Deactivated successfully. Apr 13 20:12:40.672482 containerd[1502]: time="2026-04-13T20:12:40.672369118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:40.673444 containerd[1502]: time="2026-04-13T20:12:40.673312889Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828791" Apr 13 20:12:40.674778 containerd[1502]: time="2026-04-13T20:12:40.674211080Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:40.675982 containerd[1502]: time="2026-04-13T20:12:40.675956261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:40.676443 containerd[1502]: time="2026-04-13T20:12:40.676415102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.473901798s" Apr 13 20:12:40.676495 containerd[1502]: time="2026-04-13T20:12:40.676484752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 13 20:12:40.676898 containerd[1502]: time="2026-04-13T20:12:40.676883282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 20:12:41.225792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776350712.mount: Deactivated successfully. 
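The kubelet exit at 20:12:40 is the usual first-boot failure on a node that has not yet been joined: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, and systemd simply keeps restarting the unit until the file exists. For reference, a minimal sketch of what such a file contains; the field names are the standard KubeletConfiguration v1beta1 schema, and the values here are illustrative rather than taken from this node:

  cat <<'EOF' > /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.crt
  EOF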
Apr 13 20:12:42.020311 containerd[1502]: time="2026-04-13T20:12:42.020241691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:42.021586 containerd[1502]: time="2026-04-13T20:12:42.021407902Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Apr 13 20:12:42.023964 containerd[1502]: time="2026-04-13T20:12:42.022515763Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:42.025772 containerd[1502]: time="2026-04-13T20:12:42.025005595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:42.025772 containerd[1502]: time="2026-04-13T20:12:42.025611576Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.348624714s" Apr 13 20:12:42.025772 containerd[1502]: time="2026-04-13T20:12:42.025652156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 13 20:12:42.026274 containerd[1502]: time="2026-04-13T20:12:42.026260576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 20:12:42.502943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983531879.mount: Deactivated successfully. 
Apr 13 20:12:42.512043 containerd[1502]: time="2026-04-13T20:12:42.511969821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:42.513300 containerd[1502]: time="2026-04-13T20:12:42.513202922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 13 20:12:42.518807 containerd[1502]: time="2026-04-13T20:12:42.517034725Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:42.521106 containerd[1502]: time="2026-04-13T20:12:42.521057148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:42.522302 containerd[1502]: time="2026-04-13T20:12:42.522249779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.923283ms" Apr 13 20:12:42.522571 containerd[1502]: time="2026-04-13T20:12:42.522301719Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 13 20:12:42.523005 containerd[1502]: time="2026-04-13T20:12:42.522933350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 20:12:43.067737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232755673.mount: Deactivated successfully. Apr 13 20:12:43.829733 containerd[1502]: time="2026-04-13T20:12:43.829650808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:43.830962 containerd[1502]: time="2026-04-13T20:12:43.830763739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718940" Apr 13 20:12:43.831769 containerd[1502]: time="2026-04-13T20:12:43.831688300Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:43.833919 containerd[1502]: time="2026-04-13T20:12:43.833894422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:12:43.834918 containerd[1502]: time="2026-04-13T20:12:43.834780683Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.311804073s" Apr 13 20:12:43.834918 containerd[1502]: time="2026-04-13T20:12:43.834806743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 13 20:12:46.256487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
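Between 20:12:35 and 20:12:43 the node has pulled the complete control-plane image set for this release: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd. That list can be reproduced without pulling anything, as a sanity check; a sketch, assuming a kubeadm binary of a matching minor version is available on the node:

  kubeadm config images list --kubernetes-version v1.33.10

The coredns, pause and etcd tags it prints depend on the kubeadm build's defaults, so they may not match the exact tags pulled above.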
Apr 13 20:12:46.264106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:46.286898 systemd[1]: Reloading requested from client PID 2094 ('systemctl') (unit session-7.scope)... Apr 13 20:12:46.286930 systemd[1]: Reloading... Apr 13 20:12:46.412771 zram_generator::config[2153]: No configuration found. Apr 13 20:12:46.479541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:12:46.540300 systemd[1]: Reloading finished in 252 ms. Apr 13 20:12:46.591342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:46.594915 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:12:46.595190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:46.599962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:46.747342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:46.752074 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:12:46.780919 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:12:46.781771 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:12:46.781771 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
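The ListenStream warning during the reload above is cosmetic: systemd rewrites the legacy /var/run path to /run on the fly. If it should stop appearing on every daemon-reload, a drop-in that resets the listener is enough; a sketch, assuming the shipped docker.socket declares only the one stream:

  mkdir -p /etc/systemd/system/docker.socket.d
  cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-listen.conf
  [Socket]
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  systemctl daemon-reload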
Apr 13 20:12:46.781771 kubelet[2189]: I0413 20:12:46.781283 2189 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:12:47.085686 kubelet[2189]: I0413 20:12:47.085638 2189 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:12:47.085686 kubelet[2189]: I0413 20:12:47.085662 2189 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:12:47.085882 kubelet[2189]: I0413 20:12:47.085864 2189 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:12:47.111389 kubelet[2189]: I0413 20:12:47.111214 2189 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:12:47.113270 kubelet[2189]: E0413 20:12:47.113027 2189 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://204.168.241.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:12:47.120907 kubelet[2189]: E0413 20:12:47.120866 2189 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:12:47.120907 kubelet[2189]: I0413 20:12:47.120890 2189 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:12:47.123650 kubelet[2189]: I0413 20:12:47.123621 2189 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 20:12:47.124412 kubelet[2189]: I0413 20:12:47.124364 2189 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:12:47.124556 kubelet[2189]: I0413 20:12:47.124391 2189 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-c-b0ece174b2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:12:47.124556 kubelet[2189]: I0413 20:12:47.124539 2189 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 20:12:47.124556 kubelet[2189]: I0413 20:12:47.124546 2189 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:12:47.124808 kubelet[2189]: I0413 20:12:47.124654 2189 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:47.128491 kubelet[2189]: I0413 20:12:47.128456 2189 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:12:47.128491 kubelet[2189]: I0413 20:12:47.128470 2189 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:12:47.128491 kubelet[2189]: I0413 20:12:47.128491 2189 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:12:47.130517 kubelet[2189]: I0413 20:12:47.129919 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:12:47.139049 kubelet[2189]: E0413 20:12:47.138996 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.241.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-c-b0ece174b2&limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:12:47.139154 kubelet[2189]: I0413 20:12:47.139131 2189 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:12:47.140026 kubelet[2189]: I0413 20:12:47.140002 2189 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Apr 13 20:12:47.141494 kubelet[2189]: W0413 20:12:47.141465 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 20:12:47.142115 kubelet[2189]: E0413 20:12:47.142055 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.241.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:12:47.148066 kubelet[2189]: I0413 20:12:47.148041 2189 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:12:47.148362 kubelet[2189]: I0413 20:12:47.148108 2189 server.go:1289] "Started kubelet" Apr 13 20:12:47.150296 kubelet[2189]: I0413 20:12:47.150146 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:12:47.151799 kubelet[2189]: E0413 20:12:47.150213 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://204.168.241.7:6443/api/v1/namespaces/default/events\": dial tcp 204.168.241.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-c-b0ece174b2.18a603b23f76ffa1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-c-b0ece174b2,UID:ci-4081-3-7-c-b0ece174b2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-c-b0ece174b2,},FirstTimestamp:2026-04-13 20:12:47.148064673 +0000 UTC m=+0.391435097,LastTimestamp:2026-04-13 20:12:47.148064673 +0000 UTC m=+0.391435097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-c-b0ece174b2,}" Apr 13 20:12:47.153334 kubelet[2189]: I0413 20:12:47.153311 2189 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:12:47.154181 kubelet[2189]: I0413 20:12:47.154171 2189 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:12:47.156930 kubelet[2189]: I0413 20:12:47.156896 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:12:47.157125 kubelet[2189]: I0413 20:12:47.157115 2189 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:12:47.157294 kubelet[2189]: I0413 20:12:47.157284 2189 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:12:47.158022 kubelet[2189]: I0413 20:12:47.158013 2189 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:12:47.158160 kubelet[2189]: I0413 20:12:47.158136 2189 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:12:47.158223 kubelet[2189]: I0413 20:12:47.158217 2189 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:12:47.158552 kubelet[2189]: E0413 20:12:47.158539 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.241.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:12:47.158950 
kubelet[2189]: I0413 20:12:47.158940 2189 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:12:47.159081 kubelet[2189]: I0413 20:12:47.159071 2189 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:12:47.160879 kubelet[2189]: E0413 20:12:47.160866 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:47.161244 kubelet[2189]: E0413 20:12:47.161021 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.241.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b0ece174b2?timeout=10s\": dial tcp 204.168.241.7:6443: connect: connection refused" interval="200ms" Apr 13 20:12:47.161244 kubelet[2189]: E0413 20:12:47.161142 2189 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:12:47.161573 kubelet[2189]: I0413 20:12:47.161564 2189 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:12:47.184244 kubelet[2189]: I0413 20:12:47.183851 2189 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:12:47.184244 kubelet[2189]: I0413 20:12:47.183864 2189 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:12:47.184244 kubelet[2189]: I0413 20:12:47.183897 2189 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:47.187107 kubelet[2189]: I0413 20:12:47.187087 2189 policy_none.go:49] "None policy: Start" Apr 13 20:12:47.187107 kubelet[2189]: I0413 20:12:47.187106 2189 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:12:47.187179 kubelet[2189]: I0413 20:12:47.187115 2189 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:12:47.187716 kubelet[2189]: I0413 20:12:47.187701 2189 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:12:47.189284 kubelet[2189]: I0413 20:12:47.189274 2189 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:12:47.189342 kubelet[2189]: I0413 20:12:47.189336 2189 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:12:47.189382 kubelet[2189]: I0413 20:12:47.189375 2189 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:12:47.189412 kubelet[2189]: I0413 20:12:47.189406 2189 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:12:47.189464 kubelet[2189]: E0413 20:12:47.189454 2189 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:12:47.193063 kubelet[2189]: E0413 20:12:47.192902 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://204.168.241.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:12:47.196141 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
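Every "connect: connection refused" against 204.168.241.7:6443 in this stretch is expected rather than a fault: the kubelet itself is about to start kube-apiserver as a static pod, so nothing is listening on that port yet. A sketch of how to watch the turn-around from the node itself, assuming crictl is configured for containerd and using the address the kubelet logs above:

  # the sandbox for the apiserver static pod shows up once the kubelet's sync loop runs
  crictl pods --name kube-apiserver-ci-4081-3-7-c-b0ece174b2
  # the endpoint stops refusing connections as soon as kube-apiserver is serving
  curl -sk https://204.168.241.7:6443/healthz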
Apr 13 20:12:47.204952 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:12:47.207583 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 20:12:47.214554 kubelet[2189]: E0413 20:12:47.214538 2189 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:12:47.215041 kubelet[2189]: I0413 20:12:47.214829 2189 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:12:47.215041 kubelet[2189]: I0413 20:12:47.214841 2189 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:12:47.216080 kubelet[2189]: I0413 20:12:47.215408 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:12:47.217027 kubelet[2189]: E0413 20:12:47.217008 2189 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:12:47.217075 kubelet[2189]: E0413 20:12:47.217037 2189 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:47.308997 systemd[1]: Created slice kubepods-burstable-pod3390f7752f26c3f07520bdf31500b229.slice - libcontainer container kubepods-burstable-pod3390f7752f26c3f07520bdf31500b229.slice. Apr 13 20:12:47.316927 kubelet[2189]: I0413 20:12:47.316890 2189 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.317219 kubelet[2189]: E0413 20:12:47.317194 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.241.7:6443/api/v1/nodes\": dial tcp 204.168.241.7:6443: connect: connection refused" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.327313 kubelet[2189]: E0413 20:12:47.327296 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.330058 systemd[1]: Created slice kubepods-burstable-podc8e38ed438eac67936d1708c6ec51445.slice - libcontainer container kubepods-burstable-podc8e38ed438eac67936d1708c6ec51445.slice. Apr 13 20:12:47.340864 kubelet[2189]: E0413 20:12:47.340784 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.343108 systemd[1]: Created slice kubepods-burstable-podad138215af2cccbf6979152c885b529e.slice - libcontainer container kubepods-burstable-podad138215af2cccbf6979152c885b529e.slice. 
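The three kubepods-burstable-pod<UID>.slice units correspond one-to-one with the static pod manifests the kubelet found under the staticPodPath it logged earlier (/etc/kubernetes/manifests); the UIDs are derived from the manifest contents. On a kubeadm-style control-plane node that directory typically holds:

  ls /etc/kubernetes/manifests
  # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
  # (plus etcd.yaml when etcd runs as a static pod; this log has not shown one)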
Apr 13 20:12:47.344867 kubelet[2189]: E0413 20:12:47.344846 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.362389 kubelet[2189]: E0413 20:12:47.362352 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.241.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b0ece174b2?timeout=10s\": dial tcp 204.168.241.7:6443: connect: connection refused" interval="400ms" Apr 13 20:12:47.460031 kubelet[2189]: I0413 20:12:47.459918 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3390f7752f26c3f07520bdf31500b229-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" (UID: \"3390f7752f26c3f07520bdf31500b229\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460031 kubelet[2189]: I0413 20:12:47.459976 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460281 kubelet[2189]: I0413 20:12:47.460055 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad138215af2cccbf6979152c885b529e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-c-b0ece174b2\" (UID: \"ad138215af2cccbf6979152c885b529e\") " pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460281 kubelet[2189]: I0413 20:12:47.460124 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3390f7752f26c3f07520bdf31500b229-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" (UID: \"3390f7752f26c3f07520bdf31500b229\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460281 kubelet[2189]: I0413 20:12:47.460154 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3390f7752f26c3f07520bdf31500b229-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" (UID: \"3390f7752f26c3f07520bdf31500b229\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460281 kubelet[2189]: I0413 20:12:47.460181 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460281 kubelet[2189]: I0413 20:12:47.460216 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460483 kubelet[2189]: I0413 20:12:47.460239 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.460483 kubelet[2189]: I0413 20:12:47.460263 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.520328 kubelet[2189]: I0413 20:12:47.520265 2189 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.520853 kubelet[2189]: E0413 20:12:47.520784 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.241.7:6443/api/v1/nodes\": dial tcp 204.168.241.7:6443: connect: connection refused" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.628972 containerd[1502]: time="2026-04-13T20:12:47.628810443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-c-b0ece174b2,Uid:3390f7752f26c3f07520bdf31500b229,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:47.642047 containerd[1502]: time="2026-04-13T20:12:47.641945964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-c-b0ece174b2,Uid:c8e38ed438eac67936d1708c6ec51445,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:47.645594 containerd[1502]: time="2026-04-13T20:12:47.645561867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-c-b0ece174b2,Uid:ad138215af2cccbf6979152c885b529e,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:47.763112 kubelet[2189]: E0413 20:12:47.763071 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://204.168.241.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-c-b0ece174b2?timeout=10s\": dial tcp 204.168.241.7:6443: connect: connection refused" interval="800ms" Apr 13 20:12:47.923410 kubelet[2189]: I0413 20:12:47.923276 2189 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:47.924363 kubelet[2189]: E0413 20:12:47.923677 2189 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://204.168.241.7:6443/api/v1/nodes\": dial tcp 204.168.241.7:6443: connect: connection refused" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:48.098599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4080702570.mount: Deactivated successfully. 
Apr 13 20:12:48.109720 containerd[1502]: time="2026-04-13T20:12:48.109603264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:48.111117 containerd[1502]: time="2026-04-13T20:12:48.111042225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:48.112408 containerd[1502]: time="2026-04-13T20:12:48.112334056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:12:48.113320 containerd[1502]: time="2026-04-13T20:12:48.113249827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 13 20:12:48.114262 containerd[1502]: time="2026-04-13T20:12:48.114170148Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:48.116614 containerd[1502]: time="2026-04-13T20:12:48.116339530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:48.116614 containerd[1502]: time="2026-04-13T20:12:48.116485860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:12:48.120942 containerd[1502]: time="2026-04-13T20:12:48.120899133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:12:48.124832 containerd[1502]: time="2026-04-13T20:12:48.124775727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.871334ms" Apr 13 20:12:48.126283 containerd[1502]: time="2026-04-13T20:12:48.126199038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.584071ms" Apr 13 20:12:48.128608 containerd[1502]: time="2026-04-13T20:12:48.128523210Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.524036ms" Apr 13 20:12:48.186905 kubelet[2189]: E0413 20:12:48.186043 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://204.168.241.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-c-b0ece174b2&limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:12:48.227833 kubelet[2189]: E0413 20:12:48.227718 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://204.168.241.7:6443/api/v1/namespaces/default/events\": dial tcp 204.168.241.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-c-b0ece174b2.18a603b23f76ffa1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-c-b0ece174b2,UID:ci-4081-3-7-c-b0ece174b2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-c-b0ece174b2,},FirstTimestamp:2026-04-13 20:12:47.148064673 +0000 UTC m=+0.391435097,LastTimestamp:2026-04-13 20:12:47.148064673 +0000 UTC m=+0.391435097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-c-b0ece174b2,}" Apr 13 20:12:48.252344 containerd[1502]: time="2026-04-13T20:12:48.252103293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:48.252344 containerd[1502]: time="2026-04-13T20:12:48.252158033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:48.252344 containerd[1502]: time="2026-04-13T20:12:48.252167803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:48.252344 containerd[1502]: time="2026-04-13T20:12:48.252226963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:48.254019 containerd[1502]: time="2026-04-13T20:12:48.253013443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:48.254019 containerd[1502]: time="2026-04-13T20:12:48.253076883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:48.254019 containerd[1502]: time="2026-04-13T20:12:48.253087093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:48.254019 containerd[1502]: time="2026-04-13T20:12:48.253159764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:48.255944 containerd[1502]: time="2026-04-13T20:12:48.255622706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:48.255944 containerd[1502]: time="2026-04-13T20:12:48.255662216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:48.255944 containerd[1502]: time="2026-04-13T20:12:48.255670446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:48.255944 containerd[1502]: time="2026-04-13T20:12:48.255753616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:48.274904 systemd[1]: Started cri-containerd-23e8b35d19b881b5d4103d196d138ff9cf8260e57bd8278dc638ed92230d51e6.scope - libcontainer container 23e8b35d19b881b5d4103d196d138ff9cf8260e57bd8278dc638ed92230d51e6. Apr 13 20:12:48.287873 systemd[1]: Started cri-containerd-b877f39c38953b98ece0b0b644acb6829a1cd011a2ab7142df72fdf469e4898c.scope - libcontainer container b877f39c38953b98ece0b0b644acb6829a1cd011a2ab7142df72fdf469e4898c. Apr 13 20:12:48.294977 systemd[1]: Started cri-containerd-841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e.scope - libcontainer container 841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e. Apr 13 20:12:48.314671 kubelet[2189]: E0413 20:12:48.313959 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://204.168.241.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:12:48.325775 containerd[1502]: time="2026-04-13T20:12:48.325203834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-c-b0ece174b2,Uid:3390f7752f26c3f07520bdf31500b229,Namespace:kube-system,Attempt:0,} returns sandbox id \"23e8b35d19b881b5d4103d196d138ff9cf8260e57bd8278dc638ed92230d51e6\"" Apr 13 20:12:48.334023 containerd[1502]: time="2026-04-13T20:12:48.333940991Z" level=info msg="CreateContainer within sandbox \"23e8b35d19b881b5d4103d196d138ff9cf8260e57bd8278dc638ed92230d51e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:12:48.351337 containerd[1502]: time="2026-04-13T20:12:48.350858995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-c-b0ece174b2,Uid:ad138215af2cccbf6979152c885b529e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b877f39c38953b98ece0b0b644acb6829a1cd011a2ab7142df72fdf469e4898c\"" Apr 13 20:12:48.351438 containerd[1502]: time="2026-04-13T20:12:48.351357655Z" level=info msg="CreateContainer within sandbox \"23e8b35d19b881b5d4103d196d138ff9cf8260e57bd8278dc638ed92230d51e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a02386bd5abddd1f4bef0899c97cf027c9bc2490099db009c95015877ca572a\"" Apr 13 20:12:48.354413 containerd[1502]: time="2026-04-13T20:12:48.353833967Z" level=info msg="StartContainer for \"6a02386bd5abddd1f4bef0899c97cf027c9bc2490099db009c95015877ca572a\"" Apr 13 20:12:48.354504 containerd[1502]: time="2026-04-13T20:12:48.354460168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-c-b0ece174b2,Uid:c8e38ed438eac67936d1708c6ec51445,Namespace:kube-system,Attempt:0,} returns sandbox id \"841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e\"" Apr 13 20:12:48.355829 containerd[1502]: time="2026-04-13T20:12:48.355806179Z" level=info msg="CreateContainer within sandbox \"b877f39c38953b98ece0b0b644acb6829a1cd011a2ab7142df72fdf469e4898c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:12:48.358431 containerd[1502]: time="2026-04-13T20:12:48.358279231Z" level=info msg="CreateContainer within sandbox \"841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:12:48.370802 containerd[1502]: time="2026-04-13T20:12:48.370766352Z" 
level=info msg="CreateContainer within sandbox \"b877f39c38953b98ece0b0b644acb6829a1cd011a2ab7142df72fdf469e4898c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e459607ef2e5db89d4072f4018be326a1ba036817f2be6f2cdb856ce89a6fa35\"" Apr 13 20:12:48.371671 containerd[1502]: time="2026-04-13T20:12:48.371635172Z" level=info msg="StartContainer for \"e459607ef2e5db89d4072f4018be326a1ba036817f2be6f2cdb856ce89a6fa35\"" Apr 13 20:12:48.377716 containerd[1502]: time="2026-04-13T20:12:48.377663467Z" level=info msg="CreateContainer within sandbox \"841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551\"" Apr 13 20:12:48.379815 containerd[1502]: time="2026-04-13T20:12:48.379776429Z" level=info msg="StartContainer for \"4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551\"" Apr 13 20:12:48.385596 systemd[1]: Started cri-containerd-6a02386bd5abddd1f4bef0899c97cf027c9bc2490099db009c95015877ca572a.scope - libcontainer container 6a02386bd5abddd1f4bef0899c97cf027c9bc2490099db009c95015877ca572a. Apr 13 20:12:48.409912 systemd[1]: Started cri-containerd-e459607ef2e5db89d4072f4018be326a1ba036817f2be6f2cdb856ce89a6fa35.scope - libcontainer container e459607ef2e5db89d4072f4018be326a1ba036817f2be6f2cdb856ce89a6fa35. Apr 13 20:12:48.413584 systemd[1]: Started cri-containerd-4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551.scope - libcontainer container 4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551. Apr 13 20:12:48.438980 containerd[1502]: time="2026-04-13T20:12:48.438894188Z" level=info msg="StartContainer for \"6a02386bd5abddd1f4bef0899c97cf027c9bc2490099db009c95015877ca572a\" returns successfully" Apr 13 20:12:48.464982 containerd[1502]: time="2026-04-13T20:12:48.464940020Z" level=info msg="StartContainer for \"e459607ef2e5db89d4072f4018be326a1ba036817f2be6f2cdb856ce89a6fa35\" returns successfully" Apr 13 20:12:48.480815 containerd[1502]: time="2026-04-13T20:12:48.480033973Z" level=info msg="StartContainer for \"4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551\" returns successfully" Apr 13 20:12:48.490554 kubelet[2189]: E0413 20:12:48.490505 2189 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://204.168.241.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 204.168.241.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:12:48.727042 kubelet[2189]: I0413 20:12:48.726711 2189 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:49.199783 kubelet[2189]: E0413 20:12:49.199202 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:49.200109 kubelet[2189]: E0413 20:12:49.199902 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:49.203355 kubelet[2189]: E0413 20:12:49.203332 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 
20:12:49.828111 kubelet[2189]: E0413 20:12:49.828050 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.008248 kubelet[2189]: I0413 20:12:50.008199 2189 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.008248 kubelet[2189]: E0413 20:12:50.008237 2189 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-c-b0ece174b2\": node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.021438 kubelet[2189]: E0413 20:12:50.021390 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.122376 kubelet[2189]: E0413 20:12:50.122028 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.208290 kubelet[2189]: E0413 20:12:50.207801 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.208290 kubelet[2189]: E0413 20:12:50.208055 2189 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.222654 kubelet[2189]: E0413 20:12:50.222595 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.323573 kubelet[2189]: E0413 20:12:50.323505 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.424022 kubelet[2189]: E0413 20:12:50.423825 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.525030 kubelet[2189]: E0413 20:12:50.524951 2189 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-c-b0ece174b2\" not found" Apr 13 20:12:50.561756 kubelet[2189]: I0413 20:12:50.561682 2189 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.568408 kubelet[2189]: E0413 20:12:50.568340 2189 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.568408 kubelet[2189]: I0413 20:12:50.568358 2189 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.569225 kubelet[2189]: E0413 20:12:50.569188 2189 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-c-b0ece174b2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.569225 kubelet[2189]: I0413 20:12:50.569221 2189 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:50.570142 kubelet[2189]: E0413 20:12:50.570111 2189 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:51.143518 kubelet[2189]: I0413 20:12:51.143368 2189 apiserver.go:52] "Watching apiserver" Apr 13 20:12:51.159037 kubelet[2189]: I0413 20:12:51.158988 2189 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:12:51.434833 kubelet[2189]: I0413 20:12:51.434673 2189 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:51.723848 systemd[1]: Reloading requested from client PID 2473 ('systemctl') (unit session-7.scope)... Apr 13 20:12:51.723876 systemd[1]: Reloading... Apr 13 20:12:51.839785 zram_generator::config[2525]: No configuration found. Apr 13 20:12:51.910085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:12:51.981692 systemd[1]: Reloading finished in 257 ms. Apr 13 20:12:52.026627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:52.039021 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:12:52.039252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:52.043990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:12:52.160405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:12:52.170086 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:12:52.209062 kubelet[2564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:12:52.209062 kubelet[2564]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:12:52.209062 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 20:12:52.209062 kubelet[2564]: I0413 20:12:52.208950 2564 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:12:52.216511 kubelet[2564]: I0413 20:12:52.216395 2564 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:12:52.216511 kubelet[2564]: I0413 20:12:52.216417 2564 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:12:52.218366 kubelet[2564]: I0413 20:12:52.218322 2564 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:12:52.219550 kubelet[2564]: I0413 20:12:52.219521 2564 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:12:52.221073 kubelet[2564]: I0413 20:12:52.220956 2564 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:12:52.224874 kubelet[2564]: E0413 20:12:52.224815 2564 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:12:52.225051 kubelet[2564]: I0413 20:12:52.224835 2564 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:12:52.228333 kubelet[2564]: I0413 20:12:52.228322 2564 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 20:12:52.228717 kubelet[2564]: I0413 20:12:52.228690 2564 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:12:52.228913 kubelet[2564]: I0413 20:12:52.228783 2564 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-c-b0ece174b2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:12:52.229092 kubelet[2564]: I0413 20:12:52.229016 2564 topology_manager.go:138] 
"Creating topology manager with none policy" Apr 13 20:12:52.229092 kubelet[2564]: I0413 20:12:52.229024 2564 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:12:52.229092 kubelet[2564]: I0413 20:12:52.229067 2564 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:52.229332 kubelet[2564]: I0413 20:12:52.229272 2564 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:12:52.229332 kubelet[2564]: I0413 20:12:52.229285 2564 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:12:52.229332 kubelet[2564]: I0413 20:12:52.229304 2564 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:12:52.229887 kubelet[2564]: I0413 20:12:52.229842 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:12:52.232601 kubelet[2564]: I0413 20:12:52.232394 2564 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:12:52.233553 kubelet[2564]: I0413 20:12:52.233498 2564 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:12:52.236472 kubelet[2564]: I0413 20:12:52.236462 2564 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:12:52.236556 kubelet[2564]: I0413 20:12:52.236550 2564 server.go:1289] "Started kubelet" Apr 13 20:12:52.242655 kubelet[2564]: I0413 20:12:52.241552 2564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:12:52.242655 kubelet[2564]: I0413 20:12:52.242121 2564 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:12:52.243829 kubelet[2564]: I0413 20:12:52.243817 2564 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:12:52.247363 kubelet[2564]: I0413 20:12:52.247350 2564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:12:52.255521 kubelet[2564]: I0413 20:12:52.247846 2564 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:12:52.255868 kubelet[2564]: I0413 20:12:52.247869 2564 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:12:52.255868 kubelet[2564]: I0413 20:12:52.247878 2564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:12:52.256017 kubelet[2564]: I0413 20:12:52.255995 2564 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:12:52.256289 kubelet[2564]: I0413 20:12:52.256109 2564 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:12:52.256495 kubelet[2564]: I0413 20:12:52.256474 2564 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:12:52.256559 kubelet[2564]: I0413 20:12:52.256537 2564 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:12:52.258198 kubelet[2564]: E0413 20:12:52.258103 2564 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:12:52.258357 kubelet[2564]: I0413 20:12:52.258340 2564 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 13 20:12:52.259599 kubelet[2564]: I0413 20:12:52.259050 2564 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:12:52.259713 kubelet[2564]: I0413 20:12:52.259688 2564 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:12:52.259781 kubelet[2564]: I0413 20:12:52.259774 2564 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:12:52.259856 kubelet[2564]: I0413 20:12:52.259846 2564 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:12:52.259884 kubelet[2564]: I0413 20:12:52.259878 2564 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:12:52.260012 kubelet[2564]: E0413 20:12:52.259968 2564 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:12:52.301152 kubelet[2564]: I0413 20:12:52.301121 2564 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:12:52.301326 kubelet[2564]: I0413 20:12:52.301316 2564 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:12:52.301383 kubelet[2564]: I0413 20:12:52.301376 2564 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:12:52.301772 kubelet[2564]: I0413 20:12:52.301757 2564 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:12:52.301853 kubelet[2564]: I0413 20:12:52.301836 2564 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:12:52.301885 kubelet[2564]: I0413 20:12:52.301880 2564 policy_none.go:49] "None policy: Start" Apr 13 20:12:52.301923 kubelet[2564]: I0413 20:12:52.301917 2564 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:12:52.301957 kubelet[2564]: I0413 20:12:52.301952 2564 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:12:52.302084 kubelet[2564]: I0413 20:12:52.302076 2564 state_mem.go:75] "Updated machine memory state" Apr 13 20:12:52.308939 kubelet[2564]: E0413 20:12:52.308924 2564 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:12:52.309503 kubelet[2564]: I0413 20:12:52.309493 2564 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:12:52.309576 kubelet[2564]: I0413 20:12:52.309556 2564 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:12:52.309899 kubelet[2564]: I0413 20:12:52.309861 2564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:12:52.312795 kubelet[2564]: E0413 20:12:52.311947 2564 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:12:52.361331 kubelet[2564]: I0413 20:12:52.361269 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.361807 kubelet[2564]: I0413 20:12:52.361641 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.362126 kubelet[2564]: I0413 20:12:52.362072 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.369377 kubelet[2564]: E0413 20:12:52.369318 2564 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.419458 kubelet[2564]: I0413 20:12:52.419411 2564 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.431876 kubelet[2564]: I0413 20:12:52.431799 2564 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.432113 kubelet[2564]: I0413 20:12:52.431921 2564 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557484 kubelet[2564]: I0413 20:12:52.557306 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557484 kubelet[2564]: I0413 20:12:52.557360 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557484 kubelet[2564]: I0413 20:12:52.557397 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557484 kubelet[2564]: I0413 20:12:52.557424 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad138215af2cccbf6979152c885b529e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-c-b0ece174b2\" (UID: \"ad138215af2cccbf6979152c885b529e\") " pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557484 kubelet[2564]: I0413 20:12:52.557449 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3390f7752f26c3f07520bdf31500b229-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" (UID: \"3390f7752f26c3f07520bdf31500b229\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557855 kubelet[2564]: I0413 20:12:52.557487 2564 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557855 kubelet[2564]: I0413 20:12:52.557511 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8e38ed438eac67936d1708c6ec51445-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-c-b0ece174b2\" (UID: \"c8e38ed438eac67936d1708c6ec51445\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557855 kubelet[2564]: I0413 20:12:52.557535 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3390f7752f26c3f07520bdf31500b229-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" (UID: \"3390f7752f26c3f07520bdf31500b229\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:52.557855 kubelet[2564]: I0413 20:12:52.557559 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3390f7752f26c3f07520bdf31500b229-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-c-b0ece174b2\" (UID: \"3390f7752f26c3f07520bdf31500b229\") " pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:53.231068 kubelet[2564]: I0413 20:12:53.230660 2564 apiserver.go:52] "Watching apiserver" Apr 13 20:12:53.256903 kubelet[2564]: I0413 20:12:53.256835 2564 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:12:53.286331 kubelet[2564]: I0413 20:12:53.286305 2564 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:53.302997 kubelet[2564]: E0413 20:12:53.302959 2564 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-c-b0ece174b2\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" Apr 13 20:12:53.347352 kubelet[2564]: I0413 20:12:53.347266 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-c-b0ece174b2" podStartSLOduration=1.347232127 podStartE2EDuration="1.347232127s" podCreationTimestamp="2026-04-13 20:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:12:53.333635396 +0000 UTC m=+1.153052092" watchObservedRunningTime="2026-04-13 20:12:53.347232127 +0000 UTC m=+1.166648823" Apr 13 20:12:53.347652 kubelet[2564]: I0413 20:12:53.347454 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-c-b0ece174b2" podStartSLOduration=2.347449807 podStartE2EDuration="2.347449807s" podCreationTimestamp="2026-04-13 20:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:12:53.346989297 +0000 UTC m=+1.166405993" watchObservedRunningTime="2026-04-13 20:12:53.347449807 +0000 UTC m=+1.166866513" Apr 13 20:12:53.369989 kubelet[2564]: I0413 20:12:53.369872 2564 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-c-b0ece174b2" podStartSLOduration=1.369858466 podStartE2EDuration="1.369858466s" podCreationTimestamp="2026-04-13 20:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:12:53.356802035 +0000 UTC m=+1.176218741" watchObservedRunningTime="2026-04-13 20:12:53.369858466 +0000 UTC m=+1.189275162" Apr 13 20:12:58.371170 kubelet[2564]: I0413 20:12:58.370917 2564 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:12:58.371885 kubelet[2564]: I0413 20:12:58.371668 2564 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:12:58.371973 containerd[1502]: time="2026-04-13T20:12:58.371423283Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:12:59.082857 systemd[1]: Created slice kubepods-besteffort-pod3887f95f_05c5_468e_bf3b_29a2d8050049.slice - libcontainer container kubepods-besteffort-pod3887f95f_05c5_468e_bf3b_29a2d8050049.slice. Apr 13 20:12:59.097709 kubelet[2564]: I0413 20:12:59.097676 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3887f95f-05c5-468e-bf3b-29a2d8050049-kube-proxy\") pod \"kube-proxy-sdhdb\" (UID: \"3887f95f-05c5-468e-bf3b-29a2d8050049\") " pod="kube-system/kube-proxy-sdhdb" Apr 13 20:12:59.097996 kubelet[2564]: I0413 20:12:59.097925 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3887f95f-05c5-468e-bf3b-29a2d8050049-xtables-lock\") pod \"kube-proxy-sdhdb\" (UID: \"3887f95f-05c5-468e-bf3b-29a2d8050049\") " pod="kube-system/kube-proxy-sdhdb" Apr 13 20:12:59.097996 kubelet[2564]: I0413 20:12:59.097949 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3887f95f-05c5-468e-bf3b-29a2d8050049-lib-modules\") pod \"kube-proxy-sdhdb\" (UID: \"3887f95f-05c5-468e-bf3b-29a2d8050049\") " pod="kube-system/kube-proxy-sdhdb" Apr 13 20:12:59.098261 kubelet[2564]: I0413 20:12:59.098143 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hwd\" (UniqueName: \"kubernetes.io/projected/3887f95f-05c5-468e-bf3b-29a2d8050049-kube-api-access-f9hwd\") pod \"kube-proxy-sdhdb\" (UID: \"3887f95f-05c5-468e-bf3b-29a2d8050049\") " pod="kube-system/kube-proxy-sdhdb" Apr 13 20:12:59.190522 systemd[1]: Created slice kubepods-besteffort-pod447167ed_a9fd_444e_b27b_1e1623f9fc77.slice - libcontainer container kubepods-besteffort-pod447167ed_a9fd_444e_b27b_1e1623f9fc77.slice. 
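The two "Created slice" entries here show how the kubelet's systemd cgroup driver names pod cgroups: the QoS class and pod UID are folded into the slice name, with the dashes in the UID escaped to underscores. A minimal sketch of that derivation (the helper is illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameFor reproduces the pattern visible in the journal:
// kubepods-<qos class>-pod<UID with "-" replaced by "_">.slice
func sliceNameFor(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UIDs taken from the kube-proxy and tigera-operator pods in the log.
	fmt.Println(sliceNameFor("besteffort", "3887f95f-05c5-468e-bf3b-29a2d8050049"))
	fmt.Println(sliceNameFor("besteffort", "447167ed-a9fd-444e-b27b-1e1623f9fc77"))
}
```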
Apr 13 20:12:59.198525 kubelet[2564]: I0413 20:12:59.198498 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/447167ed-a9fd-444e-b27b-1e1623f9fc77-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-bjc8s\" (UID: \"447167ed-a9fd-444e-b27b-1e1623f9fc77\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bjc8s" Apr 13 20:12:59.198525 kubelet[2564]: I0413 20:12:59.198525 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92gw6\" (UniqueName: \"kubernetes.io/projected/447167ed-a9fd-444e-b27b-1e1623f9fc77-kube-api-access-92gw6\") pod \"tigera-operator-6bf85f8dd-bjc8s\" (UID: \"447167ed-a9fd-444e-b27b-1e1623f9fc77\") " pod="tigera-operator/tigera-operator-6bf85f8dd-bjc8s" Apr 13 20:12:59.393305 containerd[1502]: time="2026-04-13T20:12:59.393102004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdhdb,Uid:3887f95f-05c5-468e-bf3b-29a2d8050049,Namespace:kube-system,Attempt:0,}" Apr 13 20:12:59.442389 containerd[1502]: time="2026-04-13T20:12:59.442258035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:59.443675 containerd[1502]: time="2026-04-13T20:12:59.443441046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:59.444105 containerd[1502]: time="2026-04-13T20:12:59.443507616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:59.444866 containerd[1502]: time="2026-04-13T20:12:59.444447047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:59.487883 systemd[1]: Started cri-containerd-28ba5b3288f8ce9bad55062156b1dcdce02085dc8e3b7ca6596cf369762d3868.scope - libcontainer container 28ba5b3288f8ce9bad55062156b1dcdce02085dc8e3b7ca6596cf369762d3868. Apr 13 20:12:59.495271 containerd[1502]: time="2026-04-13T20:12:59.495241809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bjc8s,Uid:447167ed-a9fd-444e-b27b-1e1623f9fc77,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:12:59.512149 containerd[1502]: time="2026-04-13T20:12:59.512110033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdhdb,Uid:3887f95f-05c5-468e-bf3b-29a2d8050049,Namespace:kube-system,Attempt:0,} returns sandbox id \"28ba5b3288f8ce9bad55062156b1dcdce02085dc8e3b7ca6596cf369762d3868\"" Apr 13 20:12:59.520846 containerd[1502]: time="2026-04-13T20:12:59.520814680Z" level=info msg="CreateContainer within sandbox \"28ba5b3288f8ce9bad55062156b1dcdce02085dc8e3b7ca6596cf369762d3868\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:12:59.544186 containerd[1502]: time="2026-04-13T20:12:59.525906025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:12:59.544186 containerd[1502]: time="2026-04-13T20:12:59.526012925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:12:59.544186 containerd[1502]: time="2026-04-13T20:12:59.526029375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:59.544186 containerd[1502]: time="2026-04-13T20:12:59.526123295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:12:59.546963 systemd[1]: Started cri-containerd-a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5.scope - libcontainer container a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5. Apr 13 20:12:59.548084 containerd[1502]: time="2026-04-13T20:12:59.548016223Z" level=info msg="CreateContainer within sandbox \"28ba5b3288f8ce9bad55062156b1dcdce02085dc8e3b7ca6596cf369762d3868\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4dc1154079833b7b8c9c0519dbc2560aa84d0ab43d2428f5ca9ecc321a153318\"" Apr 13 20:12:59.550527 containerd[1502]: time="2026-04-13T20:12:59.550148935Z" level=info msg="StartContainer for \"4dc1154079833b7b8c9c0519dbc2560aa84d0ab43d2428f5ca9ecc321a153318\"" Apr 13 20:12:59.587858 systemd[1]: Started cri-containerd-4dc1154079833b7b8c9c0519dbc2560aa84d0ab43d2428f5ca9ecc321a153318.scope - libcontainer container 4dc1154079833b7b8c9c0519dbc2560aa84d0ab43d2428f5ca9ecc321a153318. Apr 13 20:12:59.597032 containerd[1502]: time="2026-04-13T20:12:59.596964784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-bjc8s,Uid:447167ed-a9fd-444e-b27b-1e1623f9fc77,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5\"" Apr 13 20:12:59.598965 containerd[1502]: time="2026-04-13T20:12:59.598937115Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:12:59.619643 containerd[1502]: time="2026-04-13T20:12:59.619609063Z" level=info msg="StartContainer for \"4dc1154079833b7b8c9c0519dbc2560aa84d0ab43d2428f5ca9ecc321a153318\" returns successfully" Apr 13 20:13:00.215391 systemd[1]: run-containerd-runc-k8s.io-28ba5b3288f8ce9bad55062156b1dcdce02085dc8e3b7ca6596cf369762d3868-runc.K9MhEf.mount: Deactivated successfully. Apr 13 20:13:00.568237 kubelet[2564]: I0413 20:13:00.567947 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sdhdb" podStartSLOduration=1.5679259330000002 podStartE2EDuration="1.567925933s" podCreationTimestamp="2026-04-13 20:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:13:00.318652575 +0000 UTC m=+8.138069331" watchObservedRunningTime="2026-04-13 20:13:00.567925933 +0000 UTC m=+8.387342669" Apr 13 20:13:01.544703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845937750.mount: Deactivated successfully. 
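The "Observed pod startup duration" entry for kube-proxy-sdhdb reports podStartSLOduration=1.567925933s; since no image had to be pulled (both pull timestamps are the zero time), that figure is simply the observed running time minus podCreationTimestamp. A small sketch reproducing it from the timestamps in the entry, using only the Go standard library:

```go
package main

import (
	"fmt"
	"time"
)

// The kubelet prints these timestamps in Go's default time.Time format.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2026-04-13 20:12:59 +0000 UTC")
	running, _ := time.Parse(layout, "2026-04-13 20:13:00.567925933 +0000 UTC")

	// With no image pull involved, the SLO duration is just running - created.
	fmt.Println(running.Sub(created)) // 1.567925933s, matching the logged value
}
```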
Apr 13 20:13:02.520330 containerd[1502]: time="2026-04-13T20:13:02.520268339Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:02.521535 containerd[1502]: time="2026-04-13T20:13:02.521494490Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:13:02.522654 containerd[1502]: time="2026-04-13T20:13:02.522619921Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:02.524769 containerd[1502]: time="2026-04-13T20:13:02.524719453Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:02.525648 containerd[1502]: time="2026-04-13T20:13:02.525345203Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.926385798s" Apr 13 20:13:02.525648 containerd[1502]: time="2026-04-13T20:13:02.525385763Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 20:13:02.529011 containerd[1502]: time="2026-04-13T20:13:02.528976066Z" level=info msg="CreateContainer within sandbox \"a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:13:02.549354 containerd[1502]: time="2026-04-13T20:13:02.549311843Z" level=info msg="CreateContainer within sandbox \"a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01\"" Apr 13 20:13:02.549851 containerd[1502]: time="2026-04-13T20:13:02.549812564Z" level=info msg="StartContainer for \"441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01\"" Apr 13 20:13:02.574878 systemd[1]: Started cri-containerd-441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01.scope - libcontainer container 441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01. Apr 13 20:13:02.595869 containerd[1502]: time="2026-04-13T20:13:02.595821452Z" level=info msg="StartContainer for \"441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01\" returns successfully" Apr 13 20:13:04.697614 kubelet[2564]: I0413 20:13:04.697458 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-bjc8s" podStartSLOduration=2.769808934 podStartE2EDuration="5.697445533s" podCreationTimestamp="2026-04-13 20:12:59 +0000 UTC" firstStartedPulling="2026-04-13 20:12:59.598580055 +0000 UTC m=+7.417996761" lastFinishedPulling="2026-04-13 20:13:02.526216664 +0000 UTC m=+10.345633360" observedRunningTime="2026-04-13 20:13:03.327783942 +0000 UTC m=+11.147200658" watchObservedRunningTime="2026-04-13 20:13:04.697445533 +0000 UTC m=+12.516862229" Apr 13 20:13:05.013740 systemd-timesyncd[1439]: Contacted time server 217.217.243.78:123 (2.flatcar.pool.ntp.org). 
Apr 13 20:13:05.014010 systemd-timesyncd[1439]: Initial clock synchronization to Mon 2026-04-13 20:13:05.211357 UTC. Apr 13 20:13:07.756134 sudo[1700]: pam_unix(sudo:session): session closed for user root Apr 13 20:13:07.788966 sshd[1697]: pam_unix(sshd:session): session closed for user core Apr 13 20:13:07.792069 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Apr 13 20:13:07.794217 systemd[1]: sshd@6-204.168.241.7:22-20.229.252.112:46008.service: Deactivated successfully. Apr 13 20:13:07.799123 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 20:13:07.799287 systemd[1]: session-7.scope: Consumed 4.476s CPU time, 159.3M memory peak, 0B memory swap peak. Apr 13 20:13:07.800282 systemd-logind[1489]: Removed session 7. Apr 13 20:13:09.730551 systemd[1]: Created slice kubepods-besteffort-podf97c36f2_f8a0_407f_bed1_f26355b27d92.slice - libcontainer container kubepods-besteffort-podf97c36f2_f8a0_407f_bed1_f26355b27d92.slice. Apr 13 20:13:09.772998 kubelet[2564]: I0413 20:13:09.772884 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f97c36f2-f8a0-407f-bed1-f26355b27d92-typha-certs\") pod \"calico-typha-5946f599dd-jxs9j\" (UID: \"f97c36f2-f8a0-407f-bed1-f26355b27d92\") " pod="calico-system/calico-typha-5946f599dd-jxs9j" Apr 13 20:13:09.772998 kubelet[2564]: I0413 20:13:09.772920 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcswt\" (UniqueName: \"kubernetes.io/projected/f97c36f2-f8a0-407f-bed1-f26355b27d92-kube-api-access-zcswt\") pod \"calico-typha-5946f599dd-jxs9j\" (UID: \"f97c36f2-f8a0-407f-bed1-f26355b27d92\") " pod="calico-system/calico-typha-5946f599dd-jxs9j" Apr 13 20:13:09.772998 kubelet[2564]: I0413 20:13:09.772935 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f97c36f2-f8a0-407f-bed1-f26355b27d92-tigera-ca-bundle\") pod \"calico-typha-5946f599dd-jxs9j\" (UID: \"f97c36f2-f8a0-407f-bed1-f26355b27d92\") " pod="calico-system/calico-typha-5946f599dd-jxs9j" Apr 13 20:13:09.785828 systemd[1]: Created slice kubepods-besteffort-pod159b98f2_c9e3_47a8_8a0e_bf8c499a3794.slice - libcontainer container kubepods-besteffort-pod159b98f2_c9e3_47a8_8a0e_bf8c499a3794.slice. 
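Each VerifyControllerAttachedVolume entry identifies its volume by a UniqueName of the form <plugin>/<pod UID>-<volume name>, the pattern visible throughout these reconciler lines. A small illustrative helper (not kubelet code) that rebuilds the names logged for the calico-typha pod:

```go
package main

import "fmt"

// uniqueVolumeName rebuilds the UniqueName pattern seen in the reconciler
// entries: <volume plugin>/<pod UID>-<volume name>.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	uid := "f97c36f2-f8a0-407f-bed1-f26355b27d92" // calico-typha-5946f599dd-jxs9j
	fmt.Println(uniqueVolumeName("kubernetes.io/secret", uid, "typha-certs"))
	fmt.Println(uniqueVolumeName("kubernetes.io/configmap", uid, "tigera-ca-bundle"))
	fmt.Println(uniqueVolumeName("kubernetes.io/projected", uid, "kube-api-access-zcswt"))
}
```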
Apr 13 20:13:09.873514 kubelet[2564]: I0413 20:13:09.873476 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-sys-fs\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873643 kubelet[2564]: I0413 20:13:09.873539 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-cni-bin-dir\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873643 kubelet[2564]: I0413 20:13:09.873553 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-cni-log-dir\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873643 kubelet[2564]: I0413 20:13:09.873567 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-lib-modules\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873643 kubelet[2564]: I0413 20:13:09.873583 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-policysync\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873643 kubelet[2564]: I0413 20:13:09.873603 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-flexvol-driver-host\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873740 kubelet[2564]: I0413 20:13:09.873614 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-var-lib-calico\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873740 kubelet[2564]: I0413 20:13:09.873624 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-xtables-lock\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873740 kubelet[2564]: I0413 20:13:09.873644 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-cni-net-dir\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873740 kubelet[2564]: I0413 20:13:09.873654 2564 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-node-certs\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873740 kubelet[2564]: I0413 20:13:09.873665 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-nodeproc\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873836 kubelet[2564]: I0413 20:13:09.873676 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-bpffs\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873836 kubelet[2564]: I0413 20:13:09.873686 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-var-run-calico\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873836 kubelet[2564]: I0413 20:13:09.873698 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qpc\" (UniqueName: \"kubernetes.io/projected/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-kube-api-access-78qpc\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.873836 kubelet[2564]: I0413 20:13:09.873710 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/159b98f2-c9e3-47a8-8a0e-bf8c499a3794-tigera-ca-bundle\") pod \"calico-node-k7v7h\" (UID: \"159b98f2-c9e3-47a8-8a0e-bf8c499a3794\") " pod="calico-system/calico-node-k7v7h" Apr 13 20:13:09.894562 kubelet[2564]: E0413 20:13:09.894511 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:09.974559 kubelet[2564]: I0413 20:13:09.974501 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rb7\" (UniqueName: \"kubernetes.io/projected/484da9bd-407d-408c-b0d2-a512d2d9a654-kube-api-access-s4rb7\") pod \"csi-node-driver-hvr8v\" (UID: \"484da9bd-407d-408c-b0d2-a512d2d9a654\") " pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:09.974706 kubelet[2564]: I0413 20:13:09.974612 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/484da9bd-407d-408c-b0d2-a512d2d9a654-kubelet-dir\") pod \"csi-node-driver-hvr8v\" (UID: \"484da9bd-407d-408c-b0d2-a512d2d9a654\") " pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:09.974706 kubelet[2564]: I0413 20:13:09.974625 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/484da9bd-407d-408c-b0d2-a512d2d9a654-registration-dir\") pod \"csi-node-driver-hvr8v\" (UID: \"484da9bd-407d-408c-b0d2-a512d2d9a654\") " pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:09.974706 kubelet[2564]: I0413 20:13:09.974636 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/484da9bd-407d-408c-b0d2-a512d2d9a654-varrun\") pod \"csi-node-driver-hvr8v\" (UID: \"484da9bd-407d-408c-b0d2-a512d2d9a654\") " pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:09.974706 kubelet[2564]: I0413 20:13:09.974673 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/484da9bd-407d-408c-b0d2-a512d2d9a654-socket-dir\") pod \"csi-node-driver-hvr8v\" (UID: \"484da9bd-407d-408c-b0d2-a512d2d9a654\") " pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:09.982839 kubelet[2564]: E0413 20:13:09.979813 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:09.982839 kubelet[2564]: W0413 20:13:09.979827 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:09.982839 kubelet[2564]: E0413 20:13:09.979843 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:09.985816 kubelet[2564]: E0413 20:13:09.985340 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:09.985816 kubelet[2564]: W0413 20:13:09.985354 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:09.985816 kubelet[2564]: E0413 20:13:09.985368 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.037968 containerd[1502]: time="2026-04-13T20:13:10.037891820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5946f599dd-jxs9j,Uid:f97c36f2-f8a0-407f-bed1-f26355b27d92,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:10.066590 containerd[1502]: time="2026-04-13T20:13:10.064917650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:10.066590 containerd[1502]: time="2026-04-13T20:13:10.065780746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:10.066590 containerd[1502]: time="2026-04-13T20:13:10.065790901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:10.066590 containerd[1502]: time="2026-04-13T20:13:10.065905723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:10.076627 kubelet[2564]: E0413 20:13:10.076352 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.076627 kubelet[2564]: W0413 20:13:10.076380 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.076627 kubelet[2564]: E0413 20:13:10.076444 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.078339 kubelet[2564]: E0413 20:13:10.077738 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.078339 kubelet[2564]: W0413 20:13:10.077790 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.078339 kubelet[2564]: E0413 20:13:10.077809 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.078339 kubelet[2564]: E0413 20:13:10.078301 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.078978 kubelet[2564]: W0413 20:13:10.078667 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.078978 kubelet[2564]: E0413 20:13:10.078701 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.081463 kubelet[2564]: E0413 20:13:10.080792 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.081463 kubelet[2564]: W0413 20:13:10.080814 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.081463 kubelet[2564]: E0413 20:13:10.080837 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.081463 kubelet[2564]: E0413 20:13:10.081333 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.081463 kubelet[2564]: W0413 20:13:10.081349 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.081463 kubelet[2564]: E0413 20:13:10.081367 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:10.081881 kubelet[2564]: E0413 20:13:10.081845 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.081915 kubelet[2564]: W0413 20:13:10.081893 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.081915 kubelet[2564]: E0413 20:13:10.081906 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.082201 kubelet[2564]: E0413 20:13:10.082184 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.082201 kubelet[2564]: W0413 20:13:10.082194 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.082201 kubelet[2564]: E0413 20:13:10.082201 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.082446 kubelet[2564]: E0413 20:13:10.082424 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.082446 kubelet[2564]: W0413 20:13:10.082433 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.082446 kubelet[2564]: E0413 20:13:10.082439 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.082700 kubelet[2564]: E0413 20:13:10.082689 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.082700 kubelet[2564]: W0413 20:13:10.082698 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.082764 kubelet[2564]: E0413 20:13:10.082704 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.082980 kubelet[2564]: E0413 20:13:10.082956 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.082980 kubelet[2564]: W0413 20:13:10.082966 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.083039 kubelet[2564]: E0413 20:13:10.082986 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:10.083214 kubelet[2564]: E0413 20:13:10.083199 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.083214 kubelet[2564]: W0413 20:13:10.083209 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.083247 kubelet[2564]: E0413 20:13:10.083215 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.083413 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085154 kubelet[2564]: W0413 20:13:10.083421 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.083427 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.083628 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085154 kubelet[2564]: W0413 20:13:10.083634 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.083640 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.083904 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085154 kubelet[2564]: W0413 20:13:10.083910 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.083917 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085154 kubelet[2564]: E0413 20:13:10.084325 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085355 kubelet[2564]: W0413 20:13:10.084332 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085355 kubelet[2564]: E0413 20:13:10.084340 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:10.085355 kubelet[2564]: E0413 20:13:10.084608 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085355 kubelet[2564]: W0413 20:13:10.084615 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085355 kubelet[2564]: E0413 20:13:10.084622 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085355 kubelet[2564]: E0413 20:13:10.084874 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085355 kubelet[2564]: W0413 20:13:10.084881 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085355 kubelet[2564]: E0413 20:13:10.084887 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085355 kubelet[2564]: E0413 20:13:10.085313 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085355 kubelet[2564]: W0413 20:13:10.085322 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085502 kubelet[2564]: E0413 20:13:10.085329 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085683 kubelet[2564]: E0413 20:13:10.085545 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085683 kubelet[2564]: W0413 20:13:10.085576 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085683 kubelet[2564]: E0413 20:13:10.085583 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.085976 kubelet[2564]: E0413 20:13:10.085801 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.085976 kubelet[2564]: W0413 20:13:10.085809 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.085976 kubelet[2564]: E0413 20:13:10.085816 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:10.085862 systemd[1]: Started cri-containerd-d8b2a25584fe7addbe8a125080e9670b20c60506976a479c057631ed0028adea.scope - libcontainer container d8b2a25584fe7addbe8a125080e9670b20c60506976a479c057631ed0028adea. Apr 13 20:13:10.086095 kubelet[2564]: E0413 20:13:10.086060 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.086095 kubelet[2564]: W0413 20:13:10.086066 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.086095 kubelet[2564]: E0413 20:13:10.086072 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.086678 kubelet[2564]: E0413 20:13:10.086404 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.086678 kubelet[2564]: W0413 20:13:10.086415 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.086678 kubelet[2564]: E0413 20:13:10.086437 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.086678 kubelet[2564]: E0413 20:13:10.086640 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.086678 kubelet[2564]: W0413 20:13:10.086647 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.086678 kubelet[2564]: E0413 20:13:10.086665 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.087547 kubelet[2564]: E0413 20:13:10.087059 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.087547 kubelet[2564]: W0413 20:13:10.087068 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.087547 kubelet[2564]: E0413 20:13:10.087075 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:10.087547 kubelet[2564]: E0413 20:13:10.087380 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.087547 kubelet[2564]: W0413 20:13:10.087389 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.087547 kubelet[2564]: E0413 20:13:10.087398 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.090774 containerd[1502]: time="2026-04-13T20:13:10.090679300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k7v7h,Uid:159b98f2-c9e3-47a8-8a0e-bf8c499a3794,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:10.097598 kubelet[2564]: E0413 20:13:10.097546 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:10.097598 kubelet[2564]: W0413 20:13:10.097561 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:10.097598 kubelet[2564]: E0413 20:13:10.097576 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:10.114187 containerd[1502]: time="2026-04-13T20:13:10.113954549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:10.114187 containerd[1502]: time="2026-04-13T20:13:10.114001531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:10.114187 containerd[1502]: time="2026-04-13T20:13:10.114012559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:10.114316 containerd[1502]: time="2026-04-13T20:13:10.114245762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:10.131899 systemd[1]: Started cri-containerd-2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd.scope - libcontainer container 2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd. 
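The repeated driver-call.go errors above share one cause: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds a nodeagent~uds plugin directory whose uds executable is missing, so the init call produces no output, and unmarshalling an empty string as JSON fails with "unexpected end of JSON input". A minimal reproduction of that failure, with the shape of a successful init reply shown as an assumption about the FlexVolume contract rather than something taken from this log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the kind of reply a FlexVolume driver prints on
// stdout; the field names here are an assumption for illustration.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus

	// The uds binary is missing, so the "init" call returns no output at all.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // "unexpected end of JSON input", the error repeated in the log

	// A non-empty reply such as {"status":"Success","capabilities":{"attach":false}}
	// would decode cleanly and silence the probe errors.
	_ = json.Unmarshal([]byte(`{"status":"Success","capabilities":{"attach":false}}`), &st)
	fmt.Println(st.Status, st.Capabilities["attach"])
}
```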
Apr 13 20:13:10.141029 containerd[1502]: time="2026-04-13T20:13:10.140998457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5946f599dd-jxs9j,Uid:f97c36f2-f8a0-407f-bed1-f26355b27d92,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8b2a25584fe7addbe8a125080e9670b20c60506976a479c057631ed0028adea\"" Apr 13 20:13:10.145584 containerd[1502]: time="2026-04-13T20:13:10.145421083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:13:10.155019 containerd[1502]: time="2026-04-13T20:13:10.154985363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k7v7h,Uid:159b98f2-c9e3-47a8-8a0e-bf8c499a3794,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\"" Apr 13 20:13:11.261330 kubelet[2564]: E0413 20:13:11.261219 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:12.181277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809362031.mount: Deactivated successfully. Apr 13 20:13:13.147946 update_engine[1490]: I20260413 20:13:13.147833 1490 update_attempter.cc:509] Updating boot flags... Apr 13 20:13:13.227803 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3092) Apr 13 20:13:13.268878 kubelet[2564]: E0413 20:13:13.263540 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:13.304774 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3093) Apr 13 20:13:13.401280 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3093) Apr 13 20:13:13.711108 containerd[1502]: time="2026-04-13T20:13:13.711052038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:13.712285 containerd[1502]: time="2026-04-13T20:13:13.712179627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:13:13.713202 containerd[1502]: time="2026-04-13T20:13:13.713076152Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:13.714833 containerd[1502]: time="2026-04-13T20:13:13.714793506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:13.715457 containerd[1502]: time="2026-04-13T20:13:13.715193686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", 
size \"36107450\" in 3.569752032s" Apr 13 20:13:13.715457 containerd[1502]: time="2026-04-13T20:13:13.715215728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:13:13.716660 containerd[1502]: time="2026-04-13T20:13:13.716641899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:13:13.727697 containerd[1502]: time="2026-04-13T20:13:13.727665531Z" level=info msg="CreateContainer within sandbox \"d8b2a25584fe7addbe8a125080e9670b20c60506976a479c057631ed0028adea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:13:13.750465 containerd[1502]: time="2026-04-13T20:13:13.750428663Z" level=info msg="CreateContainer within sandbox \"d8b2a25584fe7addbe8a125080e9670b20c60506976a479c057631ed0028adea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f6adb13e0f20d39a68abc884682be0ecaf54d25f1b88cfffe5d59159ed63d5df\"" Apr 13 20:13:13.750996 containerd[1502]: time="2026-04-13T20:13:13.750958057Z" level=info msg="StartContainer for \"f6adb13e0f20d39a68abc884682be0ecaf54d25f1b88cfffe5d59159ed63d5df\"" Apr 13 20:13:13.777894 systemd[1]: Started cri-containerd-f6adb13e0f20d39a68abc884682be0ecaf54d25f1b88cfffe5d59159ed63d5df.scope - libcontainer container f6adb13e0f20d39a68abc884682be0ecaf54d25f1b88cfffe5d59159ed63d5df. Apr 13 20:13:13.818068 containerd[1502]: time="2026-04-13T20:13:13.817958380Z" level=info msg="StartContainer for \"f6adb13e0f20d39a68abc884682be0ecaf54d25f1b88cfffe5d59159ed63d5df\" returns successfully" Apr 13 20:13:14.389107 kubelet[2564]: E0413 20:13:14.388897 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.389107 kubelet[2564]: W0413 20:13:14.388926 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.389107 kubelet[2564]: E0413 20:13:14.388953 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.391612 kubelet[2564]: E0413 20:13:14.390838 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.391612 kubelet[2564]: W0413 20:13:14.390861 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.391612 kubelet[2564]: E0413 20:13:14.390882 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.391612 kubelet[2564]: E0413 20:13:14.391336 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.391612 kubelet[2564]: W0413 20:13:14.391350 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.391612 kubelet[2564]: E0413 20:13:14.391366 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.392398 kubelet[2564]: E0413 20:13:14.392366 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.392398 kubelet[2564]: W0413 20:13:14.392392 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.392540 kubelet[2564]: E0413 20:13:14.392410 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.393107 kubelet[2564]: E0413 20:13:14.393073 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.393107 kubelet[2564]: W0413 20:13:14.393096 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.393235 kubelet[2564]: E0413 20:13:14.393112 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.394562 kubelet[2564]: E0413 20:13:14.394163 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.394562 kubelet[2564]: W0413 20:13:14.394183 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.394562 kubelet[2564]: E0413 20:13:14.394249 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.395177 kubelet[2564]: E0413 20:13:14.395147 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.395483 kubelet[2564]: W0413 20:13:14.395177 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.395483 kubelet[2564]: E0413 20:13:14.395212 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.396357 kubelet[2564]: E0413 20:13:14.396052 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.396357 kubelet[2564]: W0413 20:13:14.396072 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.396357 kubelet[2564]: E0413 20:13:14.396088 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.396935 kubelet[2564]: E0413 20:13:14.396897 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.397039 kubelet[2564]: W0413 20:13:14.396976 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.397039 kubelet[2564]: E0413 20:13:14.396992 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.397674 kubelet[2564]: E0413 20:13:14.397615 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.397674 kubelet[2564]: W0413 20:13:14.397633 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.397674 kubelet[2564]: E0413 20:13:14.397647 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.398363 kubelet[2564]: E0413 20:13:14.398314 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.398363 kubelet[2564]: W0413 20:13:14.398342 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.398363 kubelet[2564]: E0413 20:13:14.398362 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.398926 kubelet[2564]: E0413 20:13:14.398888 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.398926 kubelet[2564]: W0413 20:13:14.398911 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.398926 kubelet[2564]: E0413 20:13:14.398927 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.399583 kubelet[2564]: E0413 20:13:14.399385 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.399583 kubelet[2564]: W0413 20:13:14.399407 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.399583 kubelet[2564]: E0413 20:13:14.399425 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.399954 kubelet[2564]: E0413 20:13:14.399919 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.399954 kubelet[2564]: W0413 20:13:14.399941 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.400093 kubelet[2564]: E0413 20:13:14.399961 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.400487 kubelet[2564]: E0413 20:13:14.400371 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.400487 kubelet[2564]: W0413 20:13:14.400395 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.400487 kubelet[2564]: E0413 20:13:14.400412 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.417159 kubelet[2564]: E0413 20:13:14.417101 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.417278 kubelet[2564]: W0413 20:13:14.417174 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.417278 kubelet[2564]: E0413 20:13:14.417200 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.418073 kubelet[2564]: E0413 20:13:14.417834 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.418073 kubelet[2564]: W0413 20:13:14.417855 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.418073 kubelet[2564]: E0413 20:13:14.417872 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.419389 kubelet[2564]: E0413 20:13:14.419350 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.419389 kubelet[2564]: W0413 20:13:14.419375 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.419389 kubelet[2564]: E0413 20:13:14.419392 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.420079 kubelet[2564]: E0413 20:13:14.420045 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.420079 kubelet[2564]: W0413 20:13:14.420067 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.420184 kubelet[2564]: E0413 20:13:14.420083 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.420647 kubelet[2564]: E0413 20:13:14.420605 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.420647 kubelet[2564]: W0413 20:13:14.420631 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.420809 kubelet[2564]: E0413 20:13:14.420650 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.421269 kubelet[2564]: E0413 20:13:14.421235 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.421269 kubelet[2564]: W0413 20:13:14.421257 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.421363 kubelet[2564]: E0413 20:13:14.421274 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.421851 kubelet[2564]: E0413 20:13:14.421832 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.421912 kubelet[2564]: W0413 20:13:14.421851 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.421912 kubelet[2564]: E0413 20:13:14.421867 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.422461 kubelet[2564]: E0413 20:13:14.422427 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.422461 kubelet[2564]: W0413 20:13:14.422452 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.422563 kubelet[2564]: E0413 20:13:14.422514 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.423245 kubelet[2564]: E0413 20:13:14.423209 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.423245 kubelet[2564]: W0413 20:13:14.423232 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.423335 kubelet[2564]: E0413 20:13:14.423251 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.423894 kubelet[2564]: E0413 20:13:14.423855 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.423894 kubelet[2564]: W0413 20:13:14.423880 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.423894 kubelet[2564]: E0413 20:13:14.423897 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.424513 kubelet[2564]: E0413 20:13:14.424479 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.424513 kubelet[2564]: W0413 20:13:14.424503 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.424615 kubelet[2564]: E0413 20:13:14.424517 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.425094 kubelet[2564]: E0413 20:13:14.425061 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.425094 kubelet[2564]: W0413 20:13:14.425082 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.425180 kubelet[2564]: E0413 20:13:14.425098 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.425664 kubelet[2564]: E0413 20:13:14.425625 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.425664 kubelet[2564]: W0413 20:13:14.425656 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.425747 kubelet[2564]: E0413 20:13:14.425675 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.426302 kubelet[2564]: E0413 20:13:14.426267 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.426302 kubelet[2564]: W0413 20:13:14.426291 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.426423 kubelet[2564]: E0413 20:13:14.426306 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.428809 kubelet[2564]: E0413 20:13:14.427441 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.428809 kubelet[2564]: W0413 20:13:14.427470 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.428809 kubelet[2564]: E0413 20:13:14.427488 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.428809 kubelet[2564]: E0413 20:13:14.428164 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.428809 kubelet[2564]: W0413 20:13:14.428180 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.428809 kubelet[2564]: E0413 20:13:14.428197 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:14.429632 kubelet[2564]: E0413 20:13:14.429602 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.429729 kubelet[2564]: W0413 20:13:14.429667 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.429729 kubelet[2564]: E0413 20:13:14.429684 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:14.434711 kubelet[2564]: E0413 20:13:14.434681 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:14.434880 kubelet[2564]: W0413 20:13:14.434857 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:14.434963 kubelet[2564]: E0413 20:13:14.434945 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.261130 kubelet[2564]: E0413 20:13:15.261035 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:15.346708 kubelet[2564]: I0413 20:13:15.346147 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:13:15.405172 kubelet[2564]: E0413 20:13:15.404941 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.405172 kubelet[2564]: W0413 20:13:15.404980 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.405172 kubelet[2564]: E0413 20:13:15.405009 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.406240 kubelet[2564]: E0413 20:13:15.405895 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.406240 kubelet[2564]: W0413 20:13:15.405913 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.406240 kubelet[2564]: E0413 20:13:15.405933 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.407094 kubelet[2564]: E0413 20:13:15.406802 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.407094 kubelet[2564]: W0413 20:13:15.406822 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.407094 kubelet[2564]: E0413 20:13:15.406840 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.407817 kubelet[2564]: E0413 20:13:15.407576 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.407817 kubelet[2564]: W0413 20:13:15.407597 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.407817 kubelet[2564]: E0413 20:13:15.407613 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.408719 kubelet[2564]: E0413 20:13:15.408699 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.409147 kubelet[2564]: W0413 20:13:15.408831 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.409147 kubelet[2564]: E0413 20:13:15.408850 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.409392 kubelet[2564]: E0413 20:13:15.409304 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.409392 kubelet[2564]: W0413 20:13:15.409319 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.409392 kubelet[2564]: E0413 20:13:15.409332 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.409801 kubelet[2564]: E0413 20:13:15.409748 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.409998 kubelet[2564]: W0413 20:13:15.409908 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.409998 kubelet[2564]: E0413 20:13:15.409927 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.410397 kubelet[2564]: E0413 20:13:15.410381 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.410467 kubelet[2564]: W0413 20:13:15.410454 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.410522 kubelet[2564]: E0413 20:13:15.410509 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.410960 kubelet[2564]: E0413 20:13:15.410943 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.411143 kubelet[2564]: W0413 20:13:15.411059 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.411143 kubelet[2564]: E0413 20:13:15.411077 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.411651 kubelet[2564]: E0413 20:13:15.411518 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.411651 kubelet[2564]: W0413 20:13:15.411532 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.411651 kubelet[2564]: E0413 20:13:15.411545 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.412073 kubelet[2564]: E0413 20:13:15.412057 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.412179 kubelet[2564]: W0413 20:13:15.412160 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.412387 kubelet[2564]: E0413 20:13:15.412251 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.412726 kubelet[2564]: E0413 20:13:15.412706 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.412955 kubelet[2564]: W0413 20:13:15.412832 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.412955 kubelet[2564]: E0413 20:13:15.412854 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.413516 kubelet[2564]: E0413 20:13:15.413380 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.413516 kubelet[2564]: W0413 20:13:15.413396 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.413516 kubelet[2564]: E0413 20:13:15.413408 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.414063 kubelet[2564]: E0413 20:13:15.413931 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.414063 kubelet[2564]: W0413 20:13:15.413945 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.414063 kubelet[2564]: E0413 20:13:15.413957 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.414488 kubelet[2564]: E0413 20:13:15.414401 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.414488 kubelet[2564]: W0413 20:13:15.414415 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.414488 kubelet[2564]: E0413 20:13:15.414427 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.424414 kubelet[2564]: E0413 20:13:15.424239 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.424414 kubelet[2564]: W0413 20:13:15.424257 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.424414 kubelet[2564]: E0413 20:13:15.424272 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.424884 kubelet[2564]: E0413 20:13:15.424867 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.425092 kubelet[2564]: W0413 20:13:15.424961 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.425092 kubelet[2564]: E0413 20:13:15.424995 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.425723 kubelet[2564]: E0413 20:13:15.425547 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.425723 kubelet[2564]: W0413 20:13:15.425562 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.425723 kubelet[2564]: E0413 20:13:15.425575 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.426294 kubelet[2564]: E0413 20:13:15.426169 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.426294 kubelet[2564]: W0413 20:13:15.426184 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.426294 kubelet[2564]: E0413 20:13:15.426197 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.426915 kubelet[2564]: E0413 20:13:15.426736 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.426915 kubelet[2564]: W0413 20:13:15.426779 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.426915 kubelet[2564]: E0413 20:13:15.426794 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.427418 kubelet[2564]: E0413 20:13:15.427292 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.427418 kubelet[2564]: W0413 20:13:15.427306 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.427418 kubelet[2564]: E0413 20:13:15.427318 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.427998 kubelet[2564]: E0413 20:13:15.427862 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.427998 kubelet[2564]: W0413 20:13:15.427877 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.427998 kubelet[2564]: E0413 20:13:15.427890 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.429388 kubelet[2564]: E0413 20:13:15.429152 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.429388 kubelet[2564]: W0413 20:13:15.429204 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.429388 kubelet[2564]: E0413 20:13:15.429218 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.430220 kubelet[2564]: E0413 20:13:15.430080 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.430220 kubelet[2564]: W0413 20:13:15.430096 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.430220 kubelet[2564]: E0413 20:13:15.430111 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.430642 kubelet[2564]: E0413 20:13:15.430627 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.431150 kubelet[2564]: W0413 20:13:15.430716 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.431150 kubelet[2564]: E0413 20:13:15.430733 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.431376 kubelet[2564]: E0413 20:13:15.431368 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.431416 kubelet[2564]: W0413 20:13:15.431409 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.431444 kubelet[2564]: E0413 20:13:15.431438 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.431934 kubelet[2564]: E0413 20:13:15.431903 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.431934 kubelet[2564]: W0413 20:13:15.431911 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.431934 kubelet[2564]: E0413 20:13:15.431918 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.432407 kubelet[2564]: E0413 20:13:15.432398 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.432513 kubelet[2564]: W0413 20:13:15.432443 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.432513 kubelet[2564]: E0413 20:13:15.432451 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.433233 kubelet[2564]: E0413 20:13:15.433224 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.433358 kubelet[2564]: W0413 20:13:15.433278 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.433358 kubelet[2564]: E0413 20:13:15.433287 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.433629 kubelet[2564]: E0413 20:13:15.433621 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.433681 kubelet[2564]: W0413 20:13:15.433664 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.433681 kubelet[2564]: E0413 20:13:15.433673 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.434290 kubelet[2564]: E0413 20:13:15.434163 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.434290 kubelet[2564]: W0413 20:13:15.434172 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.434290 kubelet[2564]: E0413 20:13:15.434181 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.434732 kubelet[2564]: E0413 20:13:15.434550 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.434732 kubelet[2564]: W0413 20:13:15.434558 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.434732 kubelet[2564]: E0413 20:13:15.434565 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:13:15.435233 kubelet[2564]: E0413 20:13:15.435203 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:13:15.435233 kubelet[2564]: W0413 20:13:15.435212 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:13:15.435233 kubelet[2564]: E0413 20:13:15.435219 2564 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:13:15.456300 containerd[1502]: time="2026-04-13T20:13:15.456253727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:15.457289 containerd[1502]: time="2026-04-13T20:13:15.457128224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:13:15.458591 containerd[1502]: time="2026-04-13T20:13:15.458150018Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:15.459938 containerd[1502]: time="2026-04-13T20:13:15.459910395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:15.460784 containerd[1502]: time="2026-04-13T20:13:15.460489572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.743825563s" Apr 13 20:13:15.460784 containerd[1502]: time="2026-04-13T20:13:15.460515624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:13:15.463891 containerd[1502]: time="2026-04-13T20:13:15.463869909Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:13:15.486942 containerd[1502]: time="2026-04-13T20:13:15.486899632Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f\"" Apr 13 20:13:15.487519 containerd[1502]: time="2026-04-13T20:13:15.487500427Z" level=info msg="StartContainer for \"bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f\"" Apr 13 20:13:15.515901 systemd[1]: Started cri-containerd-bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f.scope - libcontainer container bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f. Apr 13 20:13:15.540548 containerd[1502]: time="2026-04-13T20:13:15.540511880Z" level=info msg="StartContainer for \"bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f\" returns successfully" Apr 13 20:13:15.550635 systemd[1]: cri-containerd-bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f.scope: Deactivated successfully. Apr 13 20:13:15.567873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f-rootfs.mount: Deactivated successfully. 
Apr 13 20:13:15.669301 containerd[1502]: time="2026-04-13T20:13:15.669238664Z" level=info msg="shim disconnected" id=bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f namespace=k8s.io Apr 13 20:13:15.669509 containerd[1502]: time="2026-04-13T20:13:15.669487430Z" level=warning msg="cleaning up after shim disconnected" id=bce0ec3a5487f0cafc18897a3d2721b4c7b07037b5461d7b02d0c01bab84e60f namespace=k8s.io Apr 13 20:13:15.669509 containerd[1502]: time="2026-04-13T20:13:15.669500033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:13:16.354072 containerd[1502]: time="2026-04-13T20:13:16.354001156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:13:16.385468 kubelet[2564]: I0413 20:13:16.385266 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5946f599dd-jxs9j" podStartSLOduration=3.813718691 podStartE2EDuration="7.385237703s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="2026-04-13 20:13:10.144537818 +0000 UTC m=+17.963954525" lastFinishedPulling="2026-04-13 20:13:13.71605683 +0000 UTC m=+21.535473537" observedRunningTime="2026-04-13 20:13:14.354865714 +0000 UTC m=+22.174282410" watchObservedRunningTime="2026-04-13 20:13:16.385237703 +0000 UTC m=+24.204654450" Apr 13 20:13:17.260739 kubelet[2564]: E0413 20:13:17.260646 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:19.261262 kubelet[2564]: E0413 20:13:19.261101 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:20.599619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706717540.mount: Deactivated successfully. 
Apr 13 20:13:20.627901 containerd[1502]: time="2026-04-13T20:13:20.627843262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:20.629101 containerd[1502]: time="2026-04-13T20:13:20.628969753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:13:20.630180 containerd[1502]: time="2026-04-13T20:13:20.630132211Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:20.632198 containerd[1502]: time="2026-04-13T20:13:20.632161384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:20.633514 containerd[1502]: time="2026-04-13T20:13:20.632992032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.278936976s" Apr 13 20:13:20.633514 containerd[1502]: time="2026-04-13T20:13:20.633017188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:13:20.636888 containerd[1502]: time="2026-04-13T20:13:20.636856340Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:13:20.652252 containerd[1502]: time="2026-04-13T20:13:20.652211364Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa\"" Apr 13 20:13:20.652872 containerd[1502]: time="2026-04-13T20:13:20.652850292Z" level=info msg="StartContainer for \"9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa\"" Apr 13 20:13:20.690575 systemd[1]: Started cri-containerd-9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa.scope - libcontainer container 9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa. Apr 13 20:13:20.716609 containerd[1502]: time="2026-04-13T20:13:20.716571394Z" level=info msg="StartContainer for \"9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa\" returns successfully" Apr 13 20:13:20.749056 systemd[1]: cri-containerd-9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa.scope: Deactivated successfully. 
Apr 13 20:13:20.838608 containerd[1502]: time="2026-04-13T20:13:20.838518611Z" level=info msg="shim disconnected" id=9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa namespace=k8s.io Apr 13 20:13:20.838990 containerd[1502]: time="2026-04-13T20:13:20.838814336Z" level=warning msg="cleaning up after shim disconnected" id=9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa namespace=k8s.io Apr 13 20:13:20.838990 containerd[1502]: time="2026-04-13T20:13:20.838826553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:13:20.854052 containerd[1502]: time="2026-04-13T20:13:20.853868315Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:13:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:13:21.260718 kubelet[2564]: E0413 20:13:21.260488 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:21.366436 containerd[1502]: time="2026-04-13T20:13:21.366355696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:13:21.602447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9063dc9947827f78cf5609a6ae6a6eb13e942f42ca25e9590f968b7e621af3fa-rootfs.mount: Deactivated successfully. Apr 13 20:13:23.261048 kubelet[2564]: E0413 20:13:23.260989 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:24.091608 containerd[1502]: time="2026-04-13T20:13:24.091553340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:24.092760 containerd[1502]: time="2026-04-13T20:13:24.092624231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:13:24.094525 containerd[1502]: time="2026-04-13T20:13:24.093563728Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:24.095375 containerd[1502]: time="2026-04-13T20:13:24.095350345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:24.095963 containerd[1502]: time="2026-04-13T20:13:24.095936263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.729526755s" Apr 13 20:13:24.096007 containerd[1502]: time="2026-04-13T20:13:24.095965829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:13:24.100339 containerd[1502]: time="2026-04-13T20:13:24.100306337Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:13:24.119810 containerd[1502]: time="2026-04-13T20:13:24.119735240Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011\"" Apr 13 20:13:24.121121 containerd[1502]: time="2026-04-13T20:13:24.120222878Z" level=info msg="StartContainer for \"9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011\"" Apr 13 20:13:24.146185 systemd[1]: Started cri-containerd-9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011.scope - libcontainer container 9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011. Apr 13 20:13:24.171585 containerd[1502]: time="2026-04-13T20:13:24.171545165Z" level=info msg="StartContainer for \"9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011\" returns successfully" Apr 13 20:13:24.631232 systemd[1]: cri-containerd-9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011.scope: Deactivated successfully. Apr 13 20:13:24.649642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011-rootfs.mount: Deactivated successfully. Apr 13 20:13:24.698050 containerd[1502]: time="2026-04-13T20:13:24.697969197Z" level=info msg="shim disconnected" id=9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011 namespace=k8s.io Apr 13 20:13:24.698050 containerd[1502]: time="2026-04-13T20:13:24.698028268Z" level=warning msg="cleaning up after shim disconnected" id=9f8e7e3fe8a290c16dfeb44d88888bb01257439cfdf2169e1adef6aadc7d7011 namespace=k8s.io Apr 13 20:13:24.698050 containerd[1502]: time="2026-04-13T20:13:24.698035525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:13:24.703140 kubelet[2564]: I0413 20:13:24.703109 2564 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 20:13:24.746902 systemd[1]: Created slice kubepods-burstable-podbbf82d0e_deb2_40ec_a573_b417e42188bc.slice - libcontainer container kubepods-burstable-podbbf82d0e_deb2_40ec_a573_b417e42188bc.slice. Apr 13 20:13:24.756208 systemd[1]: Created slice kubepods-besteffort-pod1c1bb0e0_ee02_473c_a263_cdfa973e52e6.slice - libcontainer container kubepods-besteffort-pod1c1bb0e0_ee02_473c_a263_cdfa973e52e6.slice. Apr 13 20:13:24.767273 systemd[1]: Created slice kubepods-burstable-pod650a7fe1_f630_4ffa_8ebb_7c8ab54e8781.slice - libcontainer container kubepods-burstable-pod650a7fe1_f630_4ffa_8ebb_7c8ab54e8781.slice. Apr 13 20:13:24.773559 systemd[1]: Created slice kubepods-besteffort-pod5925fe08_2006_453b_ae7e_b5697562e697.slice - libcontainer container kubepods-besteffort-pod5925fe08_2006_453b_ae7e_b5697562e697.slice. Apr 13 20:13:24.779189 systemd[1]: Created slice kubepods-besteffort-podcd1afe66_21c5_4bb5_bb7b_16dd69973766.slice - libcontainer container kubepods-besteffort-podcd1afe66_21c5_4bb5_bb7b_16dd69973766.slice. 
Apr 13 20:13:24.785863 systemd[1]: Created slice kubepods-besteffort-pod487ea9ef_6892_48d7_a2fa_05a0f1fc06fa.slice - libcontainer container kubepods-besteffort-pod487ea9ef_6892_48d7_a2fa_05a0f1fc06fa.slice. Apr 13 20:13:24.790251 systemd[1]: Created slice kubepods-besteffort-poda4e90b81_acc5_4fe5_b623_c177b554394d.slice - libcontainer container kubepods-besteffort-poda4e90b81_acc5_4fe5_b623_c177b554394d.slice. Apr 13 20:13:24.797187 kubelet[2564]: I0413 20:13:24.797162 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zqm\" (UniqueName: \"kubernetes.io/projected/a4e90b81-acc5-4fe5-b623-c177b554394d-kube-api-access-t9zqm\") pod \"calico-kube-controllers-6c74bf58b8-pnfc5\" (UID: \"a4e90b81-acc5-4fe5-b623-c177b554394d\") " pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" Apr 13 20:13:24.797327 kubelet[2564]: I0413 20:13:24.797315 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c1bb0e0-ee02-473c-a263-cdfa973e52e6-calico-apiserver-certs\") pod \"calico-apiserver-75d956895-66fp9\" (UID: \"1c1bb0e0-ee02-473c-a263-cdfa973e52e6\") " pod="calico-system/calico-apiserver-75d956895-66fp9" Apr 13 20:13:24.797415 kubelet[2564]: I0413 20:13:24.797405 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/650a7fe1-f630-4ffa-8ebb-7c8ab54e8781-config-volume\") pod \"coredns-674b8bbfcf-fvx68\" (UID: \"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781\") " pod="kube-system/coredns-674b8bbfcf-fvx68" Apr 13 20:13:24.797510 kubelet[2564]: I0413 20:13:24.797501 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-ca-bundle\") pod \"whisker-f5f8f9646-r8pmz\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " pod="calico-system/whisker-f5f8f9646-r8pmz" Apr 13 20:13:24.797583 kubelet[2564]: I0413 20:13:24.797572 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e90b81-acc5-4fe5-b623-c177b554394d-tigera-ca-bundle\") pod \"calico-kube-controllers-6c74bf58b8-pnfc5\" (UID: \"a4e90b81-acc5-4fe5-b623-c177b554394d\") " pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" Apr 13 20:13:24.797776 kubelet[2564]: I0413 20:13:24.797765 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-nginx-config\") pod \"whisker-f5f8f9646-r8pmz\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " pod="calico-system/whisker-f5f8f9646-r8pmz" Apr 13 20:13:24.797854 kubelet[2564]: I0413 20:13:24.797846 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487ea9ef-6892-48d7-a2fa-05a0f1fc06fa-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-c6ngr\" (UID: \"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa\") " pod="calico-system/goldmane-5b85766d88-c6ngr" Apr 13 20:13:24.797941 kubelet[2564]: I0413 20:13:24.797929 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnz7v\" (UniqueName: 
\"kubernetes.io/projected/1c1bb0e0-ee02-473c-a263-cdfa973e52e6-kube-api-access-hnz7v\") pod \"calico-apiserver-75d956895-66fp9\" (UID: \"1c1bb0e0-ee02-473c-a263-cdfa973e52e6\") " pod="calico-system/calico-apiserver-75d956895-66fp9" Apr 13 20:13:24.798025 kubelet[2564]: I0413 20:13:24.798016 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf82d0e-deb2-40ec-a573-b417e42188bc-config-volume\") pod \"coredns-674b8bbfcf-dr5n9\" (UID: \"bbf82d0e-deb2-40ec-a573-b417e42188bc\") " pod="kube-system/coredns-674b8bbfcf-dr5n9" Apr 13 20:13:24.798101 kubelet[2564]: I0413 20:13:24.798093 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8897\" (UniqueName: \"kubernetes.io/projected/bbf82d0e-deb2-40ec-a573-b417e42188bc-kube-api-access-t8897\") pod \"coredns-674b8bbfcf-dr5n9\" (UID: \"bbf82d0e-deb2-40ec-a573-b417e42188bc\") " pod="kube-system/coredns-674b8bbfcf-dr5n9" Apr 13 20:13:24.798174 kubelet[2564]: I0413 20:13:24.798166 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-backend-key-pair\") pod \"whisker-f5f8f9646-r8pmz\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " pod="calico-system/whisker-f5f8f9646-r8pmz" Apr 13 20:13:24.798230 kubelet[2564]: I0413 20:13:24.798211 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9cl\" (UniqueName: \"kubernetes.io/projected/cd1afe66-21c5-4bb5-bb7b-16dd69973766-kube-api-access-vk9cl\") pod \"whisker-f5f8f9646-r8pmz\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " pod="calico-system/whisker-f5f8f9646-r8pmz" Apr 13 20:13:24.798296 kubelet[2564]: I0413 20:13:24.798287 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8mx8\" (UniqueName: \"kubernetes.io/projected/650a7fe1-f630-4ffa-8ebb-7c8ab54e8781-kube-api-access-m8mx8\") pod \"coredns-674b8bbfcf-fvx68\" (UID: \"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781\") " pod="kube-system/coredns-674b8bbfcf-fvx68" Apr 13 20:13:24.798357 kubelet[2564]: I0413 20:13:24.798349 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5925fe08-2006-453b-ae7e-b5697562e697-calico-apiserver-certs\") pod \"calico-apiserver-75d956895-42kqd\" (UID: \"5925fe08-2006-453b-ae7e-b5697562e697\") " pod="calico-system/calico-apiserver-75d956895-42kqd" Apr 13 20:13:24.798408 kubelet[2564]: I0413 20:13:24.798401 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66sj6\" (UniqueName: \"kubernetes.io/projected/5925fe08-2006-453b-ae7e-b5697562e697-kube-api-access-66sj6\") pod \"calico-apiserver-75d956895-42kqd\" (UID: \"5925fe08-2006-453b-ae7e-b5697562e697\") " pod="calico-system/calico-apiserver-75d956895-42kqd" Apr 13 20:13:24.798456 kubelet[2564]: I0413 20:13:24.798449 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/487ea9ef-6892-48d7-a2fa-05a0f1fc06fa-config\") pod \"goldmane-5b85766d88-c6ngr\" (UID: \"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa\") " pod="calico-system/goldmane-5b85766d88-c6ngr" Apr 13 20:13:24.798521 
kubelet[2564]: I0413 20:13:24.798512 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/487ea9ef-6892-48d7-a2fa-05a0f1fc06fa-goldmane-key-pair\") pod \"goldmane-5b85766d88-c6ngr\" (UID: \"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa\") " pod="calico-system/goldmane-5b85766d88-c6ngr" Apr 13 20:13:24.798558 kubelet[2564]: I0413 20:13:24.798551 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxf8\" (UniqueName: \"kubernetes.io/projected/487ea9ef-6892-48d7-a2fa-05a0f1fc06fa-kube-api-access-4wxf8\") pod \"goldmane-5b85766d88-c6ngr\" (UID: \"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa\") " pod="calico-system/goldmane-5b85766d88-c6ngr" Apr 13 20:13:25.054063 containerd[1502]: time="2026-04-13T20:13:25.053899314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dr5n9,Uid:bbf82d0e-deb2-40ec-a573-b417e42188bc,Namespace:kube-system,Attempt:0,}" Apr 13 20:13:25.063056 containerd[1502]: time="2026-04-13T20:13:25.062515396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-66fp9,Uid:1c1bb0e0-ee02-473c-a263-cdfa973e52e6,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:25.071535 containerd[1502]: time="2026-04-13T20:13:25.071451664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvx68,Uid:650a7fe1-f630-4ffa-8ebb-7c8ab54e8781,Namespace:kube-system,Attempt:0,}" Apr 13 20:13:25.077968 containerd[1502]: time="2026-04-13T20:13:25.077916203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-42kqd,Uid:5925fe08-2006-453b-ae7e-b5697562e697,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:25.092952 containerd[1502]: time="2026-04-13T20:13:25.092825059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6ngr,Uid:487ea9ef-6892-48d7-a2fa-05a0f1fc06fa,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:25.096225 containerd[1502]: time="2026-04-13T20:13:25.095383928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5f8f9646-r8pmz,Uid:cd1afe66-21c5-4bb5-bb7b-16dd69973766,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:25.096225 containerd[1502]: time="2026-04-13T20:13:25.095872682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c74bf58b8-pnfc5,Uid:a4e90b81-acc5-4fe5-b623-c177b554394d,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:25.267801 systemd[1]: Created slice kubepods-besteffort-pod484da9bd_407d_408c_b0d2_a512d2d9a654.slice - libcontainer container kubepods-besteffort-pod484da9bd_407d_408c_b0d2_a512d2d9a654.slice. 
Apr 13 20:13:25.271767 containerd[1502]: time="2026-04-13T20:13:25.270110188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvr8v,Uid:484da9bd-407d-408c-b0d2-a512d2d9a654,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:25.284402 containerd[1502]: time="2026-04-13T20:13:25.284369785Z" level=error msg="Failed to destroy network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.284992 containerd[1502]: time="2026-04-13T20:13:25.284957443Z" level=error msg="encountered an error cleaning up failed sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.285088 containerd[1502]: time="2026-04-13T20:13:25.285059131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvx68,Uid:650a7fe1-f630-4ffa-8ebb-7c8ab54e8781,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.285368 kubelet[2564]: E0413 20:13:25.285329 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.285421 kubelet[2564]: E0413 20:13:25.285386 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fvx68" Apr 13 20:13:25.285421 kubelet[2564]: E0413 20:13:25.285406 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fvx68" Apr 13 20:13:25.285461 kubelet[2564]: E0413 20:13:25.285444 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fvx68_kube-system(650a7fe1-f630-4ffa-8ebb-7c8ab54e8781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fvx68_kube-system(650a7fe1-f630-4ffa-8ebb-7c8ab54e8781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fvx68" podUID="650a7fe1-f630-4ffa-8ebb-7c8ab54e8781" Apr 13 20:13:25.291569 containerd[1502]: time="2026-04-13T20:13:25.291536586Z" level=error msg="Failed to destroy network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.294716 containerd[1502]: time="2026-04-13T20:13:25.294671149Z" level=error msg="encountered an error cleaning up failed sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.295299 containerd[1502]: time="2026-04-13T20:13:25.295280540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dr5n9,Uid:bbf82d0e-deb2-40ec-a573-b417e42188bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.295600 kubelet[2564]: E0413 20:13:25.295557 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.295650 kubelet[2564]: E0413 20:13:25.295631 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dr5n9" Apr 13 20:13:25.295673 kubelet[2564]: E0413 20:13:25.295650 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dr5n9" Apr 13 20:13:25.295915 kubelet[2564]: E0413 20:13:25.295690 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dr5n9_kube-system(bbf82d0e-deb2-40ec-a573-b417e42188bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dr5n9_kube-system(bbf82d0e-deb2-40ec-a573-b417e42188bc)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dr5n9" podUID="bbf82d0e-deb2-40ec-a573-b417e42188bc" Apr 13 20:13:25.326959 containerd[1502]: time="2026-04-13T20:13:25.326918016Z" level=error msg="Failed to destroy network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.327723 containerd[1502]: time="2026-04-13T20:13:25.327693621Z" level=error msg="encountered an error cleaning up failed sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.328557 containerd[1502]: time="2026-04-13T20:13:25.327740202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-42kqd,Uid:5925fe08-2006-453b-ae7e-b5697562e697,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.328950 kubelet[2564]: E0413 20:13:25.328796 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.328950 kubelet[2564]: E0413 20:13:25.328849 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75d956895-42kqd" Apr 13 20:13:25.328950 kubelet[2564]: E0413 20:13:25.328867 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75d956895-42kqd" Apr 13 20:13:25.329041 kubelet[2564]: E0413 20:13:25.328919 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75d956895-42kqd_calico-system(5925fe08-2006-453b-ae7e-b5697562e697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-75d956895-42kqd_calico-system(5925fe08-2006-453b-ae7e-b5697562e697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75d956895-42kqd" podUID="5925fe08-2006-453b-ae7e-b5697562e697" Apr 13 20:13:25.350049 containerd[1502]: time="2026-04-13T20:13:25.349997102Z" level=error msg="Failed to destroy network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.350526 containerd[1502]: time="2026-04-13T20:13:25.350473564Z" level=error msg="encountered an error cleaning up failed sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.350635 containerd[1502]: time="2026-04-13T20:13:25.350518301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-66fp9,Uid:1c1bb0e0-ee02-473c-a263-cdfa973e52e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.351618 kubelet[2564]: E0413 20:13:25.350837 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.351618 kubelet[2564]: E0413 20:13:25.350885 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75d956895-66fp9" Apr 13 20:13:25.351618 kubelet[2564]: E0413 20:13:25.350905 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75d956895-66fp9" Apr 13 20:13:25.351720 kubelet[2564]: E0413 20:13:25.350946 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-75d956895-66fp9_calico-system(1c1bb0e0-ee02-473c-a263-cdfa973e52e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d956895-66fp9_calico-system(1c1bb0e0-ee02-473c-a263-cdfa973e52e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75d956895-66fp9" podUID="1c1bb0e0-ee02-473c-a263-cdfa973e52e6" Apr 13 20:13:25.352457 containerd[1502]: time="2026-04-13T20:13:25.352427731Z" level=error msg="Failed to destroy network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.352848 containerd[1502]: time="2026-04-13T20:13:25.352830859Z" level=error msg="encountered an error cleaning up failed sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.352925 containerd[1502]: time="2026-04-13T20:13:25.352911727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6ngr,Uid:487ea9ef-6892-48d7-a2fa-05a0f1fc06fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.353127 kubelet[2564]: E0413 20:13:25.353109 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.353199 kubelet[2564]: E0413 20:13:25.353189 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-c6ngr" Apr 13 20:13:25.353253 kubelet[2564]: E0413 20:13:25.353244 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-c6ngr" Apr 13 20:13:25.353324 kubelet[2564]: E0413 20:13:25.353310 2564 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-c6ngr_calico-system(487ea9ef-6892-48d7-a2fa-05a0f1fc06fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-c6ngr_calico-system(487ea9ef-6892-48d7-a2fa-05a0f1fc06fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-c6ngr" podUID="487ea9ef-6892-48d7-a2fa-05a0f1fc06fa" Apr 13 20:13:25.360944 containerd[1502]: time="2026-04-13T20:13:25.360896316Z" level=error msg="Failed to destroy network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.361258 containerd[1502]: time="2026-04-13T20:13:25.361233103Z" level=error msg="encountered an error cleaning up failed sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.361301 containerd[1502]: time="2026-04-13T20:13:25.361279514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c74bf58b8-pnfc5,Uid:a4e90b81-acc5-4fe5-b623-c177b554394d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.361487 kubelet[2564]: E0413 20:13:25.361466 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.361554 kubelet[2564]: E0413 20:13:25.361540 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" Apr 13 20:13:25.361697 kubelet[2564]: E0413 20:13:25.361683 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" Apr 13 20:13:25.361797 kubelet[2564]: E0413 20:13:25.361782 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c74bf58b8-pnfc5_calico-system(a4e90b81-acc5-4fe5-b623-c177b554394d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c74bf58b8-pnfc5_calico-system(a4e90b81-acc5-4fe5-b623-c177b554394d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" podUID="a4e90b81-acc5-4fe5-b623-c177b554394d" Apr 13 20:13:25.361878 containerd[1502]: time="2026-04-13T20:13:25.361810020Z" level=error msg="Failed to destroy network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.362709 containerd[1502]: time="2026-04-13T20:13:25.362532983Z" level=error msg="encountered an error cleaning up failed sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.362709 containerd[1502]: time="2026-04-13T20:13:25.362562951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5f8f9646-r8pmz,Uid:cd1afe66-21c5-4bb5-bb7b-16dd69973766,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.362849 kubelet[2564]: E0413 20:13:25.362721 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.362948 kubelet[2564]: E0413 20:13:25.362934 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f5f8f9646-r8pmz" Apr 13 20:13:25.363002 kubelet[2564]: E0413 20:13:25.362991 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f5f8f9646-r8pmz" Apr 13 20:13:25.363125 kubelet[2564]: E0413 20:13:25.363103 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f5f8f9646-r8pmz_calico-system(cd1afe66-21c5-4bb5-bb7b-16dd69973766)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f5f8f9646-r8pmz_calico-system(cd1afe66-21c5-4bb5-bb7b-16dd69973766)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f5f8f9646-r8pmz" podUID="cd1afe66-21c5-4bb5-bb7b-16dd69973766" Apr 13 20:13:25.385442 kubelet[2564]: I0413 20:13:25.384367 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:25.385552 containerd[1502]: time="2026-04-13T20:13:25.384891681Z" level=info msg="StopPodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\"" Apr 13 20:13:25.385552 containerd[1502]: time="2026-04-13T20:13:25.385048539Z" level=info msg="Ensure that sandbox 5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738 in task-service has been cleanup successfully" Apr 13 20:13:25.387080 containerd[1502]: time="2026-04-13T20:13:25.386830239Z" level=error msg="Failed to destroy network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.388530 containerd[1502]: time="2026-04-13T20:13:25.387134954Z" level=error msg="encountered an error cleaning up failed sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.388530 containerd[1502]: time="2026-04-13T20:13:25.387165604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvr8v,Uid:484da9bd-407d-408c-b0d2-a512d2d9a654,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.388530 containerd[1502]: time="2026-04-13T20:13:25.387715789Z" level=info msg="StopPodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\"" Apr 13 20:13:25.388530 containerd[1502]: time="2026-04-13T20:13:25.388192450Z" level=info msg="Ensure that sandbox 27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0 in task-service has been cleanup successfully" Apr 13 20:13:25.388639 kubelet[2564]: E0413 20:13:25.387262 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.388639 kubelet[2564]: E0413 20:13:25.387302 2564 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:25.388639 kubelet[2564]: E0413 20:13:25.387316 2564 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hvr8v" Apr 13 20:13:25.388731 kubelet[2564]: E0413 20:13:25.387344 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hvr8v_calico-system(484da9bd-407d-408c-b0d2-a512d2d9a654)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hvr8v_calico-system(484da9bd-407d-408c-b0d2-a512d2d9a654)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hvr8v" podUID="484da9bd-407d-408c-b0d2-a512d2d9a654" Apr 13 20:13:25.388731 kubelet[2564]: I0413 20:13:25.387375 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:25.393238 kubelet[2564]: I0413 20:13:25.393214 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:25.397805 containerd[1502]: time="2026-04-13T20:13:25.396446565Z" level=info msg="StopPodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\"" Apr 13 20:13:25.397805 containerd[1502]: time="2026-04-13T20:13:25.396639592Z" level=info msg="Ensure that sandbox 628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c in task-service has been cleanup successfully" Apr 13 20:13:25.398867 kubelet[2564]: I0413 20:13:25.398198 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:25.398926 containerd[1502]: time="2026-04-13T20:13:25.398615412Z" level=info msg="StopPodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\"" Apr 13 20:13:25.398926 containerd[1502]: time="2026-04-13T20:13:25.398718533Z" level=info msg="Ensure that sandbox 561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce in task-service has been cleanup successfully" Apr 13 20:13:25.406787 
kubelet[2564]: I0413 20:13:25.406551 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:25.409852 containerd[1502]: time="2026-04-13T20:13:25.409054997Z" level=info msg="StopPodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\"" Apr 13 20:13:25.409852 containerd[1502]: time="2026-04-13T20:13:25.409189029Z" level=info msg="Ensure that sandbox 4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4 in task-service has been cleanup successfully" Apr 13 20:13:25.414265 containerd[1502]: time="2026-04-13T20:13:25.414204256Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:13:25.415609 kubelet[2564]: I0413 20:13:25.415224 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:25.419107 containerd[1502]: time="2026-04-13T20:13:25.419077516Z" level=info msg="StopPodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\"" Apr 13 20:13:25.419940 containerd[1502]: time="2026-04-13T20:13:25.419912027Z" level=info msg="Ensure that sandbox eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e in task-service has been cleanup successfully" Apr 13 20:13:25.428443 kubelet[2564]: I0413 20:13:25.428402 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:25.430222 containerd[1502]: time="2026-04-13T20:13:25.430195726Z" level=info msg="StopPodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\"" Apr 13 20:13:25.430632 containerd[1502]: time="2026-04-13T20:13:25.430616879Z" level=info msg="Ensure that sandbox 4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859 in task-service has been cleanup successfully" Apr 13 20:13:25.450759 containerd[1502]: time="2026-04-13T20:13:25.450687659Z" level=error msg="StopPodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" failed" error="failed to destroy network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.451183 kubelet[2564]: E0413 20:13:25.451150 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:25.451314 kubelet[2564]: E0413 20:13:25.451284 2564 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738"} Apr 13 20:13:25.451385 kubelet[2564]: E0413 20:13:25.451375 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.451595 kubelet[2564]: E0413 20:13:25.451494 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fvx68" podUID="650a7fe1-f630-4ffa-8ebb-7c8ab54e8781" Apr 13 20:13:25.454430 containerd[1502]: time="2026-04-13T20:13:25.454389801Z" level=info msg="CreateContainer within sandbox \"2d3a3522096ac245fd4afc6ff7e00e1106cca85712c22e267d5c7bba528cb7fd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8fa110030faa461b05db2b0c1d6cd241cb0cb2261f93bb365efa46f1af269d39\"" Apr 13 20:13:25.456265 containerd[1502]: time="2026-04-13T20:13:25.456246719Z" level=info msg="StartContainer for \"8fa110030faa461b05db2b0c1d6cd241cb0cb2261f93bb365efa46f1af269d39\"" Apr 13 20:13:25.476529 containerd[1502]: time="2026-04-13T20:13:25.476411513Z" level=error msg="StopPodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" failed" error="failed to destroy network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.477165 kubelet[2564]: E0413 20:13:25.477024 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:25.477165 kubelet[2564]: E0413 20:13:25.477073 2564 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859"} Apr 13 20:13:25.477165 kubelet[2564]: E0413 20:13:25.477109 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5925fe08-2006-453b-ae7e-b5697562e697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.477165 kubelet[2564]: E0413 20:13:25.477136 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"5925fe08-2006-453b-ae7e-b5697562e697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75d956895-42kqd" podUID="5925fe08-2006-453b-ae7e-b5697562e697" Apr 13 20:13:25.494475 containerd[1502]: time="2026-04-13T20:13:25.493923383Z" level=error msg="StopPodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" failed" error="failed to destroy network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.495022 kubelet[2564]: E0413 20:13:25.494350 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:25.495022 kubelet[2564]: E0413 20:13:25.494396 2564 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0"} Apr 13 20:13:25.495022 kubelet[2564]: E0413 20:13:25.494423 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c1bb0e0-ee02-473c-a263-cdfa973e52e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.495022 kubelet[2564]: E0413 20:13:25.494442 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c1bb0e0-ee02-473c-a263-cdfa973e52e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75d956895-66fp9" podUID="1c1bb0e0-ee02-473c-a263-cdfa973e52e6" Apr 13 20:13:25.510971 systemd[1]: Started cri-containerd-8fa110030faa461b05db2b0c1d6cd241cb0cb2261f93bb365efa46f1af269d39.scope - libcontainer container 8fa110030faa461b05db2b0c1d6cd241cb0cb2261f93bb365efa46f1af269d39. 
Apr 13 20:13:25.522777 containerd[1502]: time="2026-04-13T20:13:25.521273173Z" level=error msg="StopPodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" failed" error="failed to destroy network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.522869 kubelet[2564]: E0413 20:13:25.521827 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:25.522869 kubelet[2564]: E0413 20:13:25.521873 2564 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c"} Apr 13 20:13:25.522869 kubelet[2564]: E0413 20:13:25.521898 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbf82d0e-deb2-40ec-a573-b417e42188bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.522869 kubelet[2564]: E0413 20:13:25.521916 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbf82d0e-deb2-40ec-a573-b417e42188bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dr5n9" podUID="bbf82d0e-deb2-40ec-a573-b417e42188bc" Apr 13 20:13:25.525041 containerd[1502]: time="2026-04-13T20:13:25.523470065Z" level=error msg="StopPodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" failed" error="failed to destroy network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.525107 kubelet[2564]: E0413 20:13:25.523832 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:25.525107 kubelet[2564]: E0413 20:13:25.523856 2564 kuberuntime_manager.go:1586] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e"} Apr 13 20:13:25.525107 kubelet[2564]: E0413 20:13:25.523873 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.525107 kubelet[2564]: E0413 20:13:25.523890 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-c6ngr" podUID="487ea9ef-6892-48d7-a2fa-05a0f1fc06fa" Apr 13 20:13:25.527801 containerd[1502]: time="2026-04-13T20:13:25.527485178Z" level=error msg="StopPodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" failed" error="failed to destroy network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.527859 kubelet[2564]: E0413 20:13:25.527603 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:25.527859 kubelet[2564]: E0413 20:13:25.527626 2564 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4"} Apr 13 20:13:25.527859 kubelet[2564]: E0413 20:13:25.527702 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.527859 kubelet[2564]: E0413 20:13:25.527794 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f5f8f9646-r8pmz" podUID="cd1afe66-21c5-4bb5-bb7b-16dd69973766" Apr 13 20:13:25.529806 containerd[1502]: time="2026-04-13T20:13:25.529645048Z" level=error msg="StopPodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" failed" error="failed to destroy network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:13:25.529863 kubelet[2564]: E0413 20:13:25.529759 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:25.529863 kubelet[2564]: E0413 20:13:25.529800 2564 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce"} Apr 13 20:13:25.529863 kubelet[2564]: E0413 20:13:25.529816 2564 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4e90b81-acc5-4fe5-b623-c177b554394d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:13:25.529863 kubelet[2564]: E0413 20:13:25.529832 2564 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4e90b81-acc5-4fe5-b623-c177b554394d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" podUID="a4e90b81-acc5-4fe5-b623-c177b554394d" Apr 13 20:13:25.550204 containerd[1502]: time="2026-04-13T20:13:25.550163602Z" level=info msg="StartContainer for \"8fa110030faa461b05db2b0c1d6cd241cb0cb2261f93bb365efa46f1af269d39\" returns successfully" Apr 13 20:13:26.120333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e-shm.mount: Deactivated successfully. Apr 13 20:13:26.120524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738-shm.mount: Deactivated successfully. Apr 13 20:13:26.120663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859-shm.mount: Deactivated successfully. 
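
The KillPodSandbox failures above all reduce to one condition: the Calico CNI delete path stats /var/lib/calico/nodename, the file that the calico/node container writes once it is running and has /var/lib/calico/ mounted (the error text itself says as much), and at this point the file does not exist yet. A minimal Go sketch of that check, for illustration only (this is not the plugin's actual code, just the stat it keeps failing on):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The CNI delete path needs the node name that calico/node records at startup.
    	// While this file is missing, every sandbox teardown fails exactly as kubelet logs above.
    	name, err := os.ReadFile("/var/lib/calico/nodename")
    	if err != nil {
    		fmt.Println("calico/node has not written its nodename yet:", err)
    		return
    	}
    	fmt.Println("node name the CNI plugin would use:", string(name))
    }

Once calico-node-k7v7h finishes starting (its startup-duration entry appears just below), the teardown retries at 20:13:26 do complete, as the "Teardown processing complete" entries further on show.
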
Apr 13 20:13:26.120848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0-shm.mount: Deactivated successfully. Apr 13 20:13:26.120976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c-shm.mount: Deactivated successfully. Apr 13 20:13:26.444649 kubelet[2564]: I0413 20:13:26.443927 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:26.451489 containerd[1502]: time="2026-04-13T20:13:26.449291291Z" level=info msg="StopPodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\"" Apr 13 20:13:26.454805 containerd[1502]: time="2026-04-13T20:13:26.453116882Z" level=info msg="StopPodSandbox for \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\"" Apr 13 20:13:26.454805 containerd[1502]: time="2026-04-13T20:13:26.453390880Z" level=info msg="Ensure that sandbox eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd in task-service has been cleanup successfully" Apr 13 20:13:26.469133 kubelet[2564]: I0413 20:13:26.468110 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k7v7h" podStartSLOduration=3.527093512 podStartE2EDuration="17.468092328s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="2026-04-13 20:13:10.155845263 +0000 UTC m=+17.975261970" lastFinishedPulling="2026-04-13 20:13:24.096844069 +0000 UTC m=+31.916260786" observedRunningTime="2026-04-13 20:13:26.461965111 +0000 UTC m=+34.281381847" watchObservedRunningTime="2026-04-13 20:13:26.468092328 +0000 UTC m=+34.287509054" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.525 [INFO][3822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.525 [INFO][3822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" iface="eth0" netns="/var/run/netns/cni-48c701c8-c5e6-0cc0-ef69-fbd1ee1faf9d" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.526 [INFO][3822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" iface="eth0" netns="/var/run/netns/cni-48c701c8-c5e6-0cc0-ef69-fbd1ee1faf9d" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.526 [INFO][3822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" iface="eth0" netns="/var/run/netns/cni-48c701c8-c5e6-0cc0-ef69-fbd1ee1faf9d" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.526 [INFO][3822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.526 [INFO][3822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.554 [INFO][3864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.554 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.554 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.558 [WARNING][3864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.558 [INFO][3864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.559 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:26.564834 containerd[1502]: 2026-04-13 20:13:26.562 [INFO][3822] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:26.568940 containerd[1502]: time="2026-04-13T20:13:26.567846727Z" level=info msg="TearDown network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" successfully" Apr 13 20:13:26.568940 containerd[1502]: time="2026-04-13T20:13:26.567901331Z" level=info msg="StopPodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" returns successfully" Apr 13 20:13:26.568419 systemd[1]: run-netns-cni\x2d48c701c8\x2dc5e6\x2d0cc0\x2def69\x2dfbd1ee1faf9d.mount: Deactivated successfully. Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.520 [INFO][3840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.520 [INFO][3840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" iface="eth0" netns="/var/run/netns/cni-81b85fd1-ef2a-9b59-8c56-856fe9e89d47" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.520 [INFO][3840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" iface="eth0" netns="/var/run/netns/cni-81b85fd1-ef2a-9b59-8c56-856fe9e89d47" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.521 [INFO][3840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" iface="eth0" netns="/var/run/netns/cni-81b85fd1-ef2a-9b59-8c56-856fe9e89d47" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.521 [INFO][3840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.521 [INFO][3840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.555 [INFO][3861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.555 [INFO][3861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.559 [INFO][3861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.563 [WARNING][3861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.563 [INFO][3861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.564 [INFO][3861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:26.570401 containerd[1502]: 2026-04-13 20:13:26.566 [INFO][3840] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:26.570401 containerd[1502]: time="2026-04-13T20:13:26.570319285Z" level=info msg="TearDown network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\" successfully" Apr 13 20:13:26.570401 containerd[1502]: time="2026-04-13T20:13:26.570333960Z" level=info msg="StopPodSandbox for \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\" returns successfully" Apr 13 20:13:26.572990 containerd[1502]: time="2026-04-13T20:13:26.572919080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvr8v,Uid:484da9bd-407d-408c-b0d2-a512d2d9a654,Namespace:calico-system,Attempt:1,}" Apr 13 20:13:26.573891 systemd[1]: run-netns-cni\x2d81b85fd1\x2def2a\x2d9b59\x2d8c56\x2d856fe9e89d47.mount: Deactivated successfully. Apr 13 20:13:26.614563 kubelet[2564]: I0413 20:13:26.614448 2564 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-backend-key-pair\") pod \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " Apr 13 20:13:26.614563 kubelet[2564]: I0413 20:13:26.614494 2564 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-ca-bundle\") pod \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " Apr 13 20:13:26.614563 kubelet[2564]: I0413 20:13:26.614517 2564 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk9cl\" (UniqueName: \"kubernetes.io/projected/cd1afe66-21c5-4bb5-bb7b-16dd69973766-kube-api-access-vk9cl\") pod \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " Apr 13 20:13:26.614563 kubelet[2564]: I0413 20:13:26.614544 2564 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-nginx-config\") pod \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\" (UID: \"cd1afe66-21c5-4bb5-bb7b-16dd69973766\") " Apr 13 20:13:26.615253 kubelet[2564]: I0413 20:13:26.615142 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "cd1afe66-21c5-4bb5-bb7b-16dd69973766" (UID: "cd1afe66-21c5-4bb5-bb7b-16dd69973766"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:13:26.615618 kubelet[2564]: I0413 20:13:26.615598 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cd1afe66-21c5-4bb5-bb7b-16dd69973766" (UID: "cd1afe66-21c5-4bb5-bb7b-16dd69973766"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:13:26.618825 kubelet[2564]: I0413 20:13:26.618792 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cd1afe66-21c5-4bb5-bb7b-16dd69973766" (UID: "cd1afe66-21c5-4bb5-bb7b-16dd69973766"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:13:26.620300 kubelet[2564]: I0413 20:13:26.620278 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1afe66-21c5-4bb5-bb7b-16dd69973766-kube-api-access-vk9cl" (OuterVolumeSpecName: "kube-api-access-vk9cl") pod "cd1afe66-21c5-4bb5-bb7b-16dd69973766" (UID: "cd1afe66-21c5-4bb5-bb7b-16dd69973766"). InnerVolumeSpecName "kube-api-access-vk9cl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:13:26.671281 systemd-networkd[1414]: cali5576905c19d: Link UP Apr 13 20:13:26.673860 systemd-networkd[1414]: cali5576905c19d: Gained carrier Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.601 [ERROR][3877] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.612 [INFO][3877] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0 csi-node-driver- calico-system 484da9bd-407d-408c-b0d2-a512d2d9a654 872 0 2026-04-13 20:13:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 csi-node-driver-hvr8v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5576905c19d [] [] }} ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.612 [INFO][3877] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.635 [INFO][3891] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" HandleID="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.641 [INFO][3891] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" HandleID="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"csi-node-driver-hvr8v", "timestamp":"2026-04-13 20:13:26.635347557 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004c3ce0)} Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.641 [INFO][3891] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.641 [INFO][3891] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.641 [INFO][3891] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.643 [INFO][3891] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.647 [INFO][3891] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.650 [INFO][3891] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.652 [INFO][3891] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.653 [INFO][3891] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.653 [INFO][3891] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.654 [INFO][3891] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78 Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.659 [INFO][3891] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.662 [INFO][3891] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.193/26] block=192.168.19.192/26 handle="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.662 [INFO][3891] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.193/26] handle="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.662 [INFO][3891] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
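
The IPAM entries above show node ci-4081-3-7-c-b0ece174b2 confirming its affinity for the 192.168.19.192/26 block and claiming 192.168.19.193 from it for csi-node-driver-hvr8v. As a quick illustration of the block arithmetic only (not Calico's allocation logic), a /26 block holds 64 addresses and 192.168.19.193 is the address immediately after the block base:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.19.192/26")
    	// A /26 covers 2^(32-26) = 64 addresses.
    	fmt.Println("addresses in the block:", 1<<(32-block.Bits()))
    	// The address right after the block base, matching the claim logged above.
    	fmt.Println("next address after the base:", block.Addr().Next()) // 192.168.19.193
    }
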
Apr 13 20:13:26.693790 containerd[1502]: 2026-04-13 20:13:26.662 [INFO][3891] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.193/26] IPv6=[] ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" HandleID="k8s-pod-network.6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.694223 containerd[1502]: 2026-04-13 20:13:26.665 [INFO][3877] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"484da9bd-407d-408c-b0d2-a512d2d9a654", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"csi-node-driver-hvr8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5576905c19d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:26.694223 containerd[1502]: 2026-04-13 20:13:26.665 [INFO][3877] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.193/32] ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.694223 containerd[1502]: 2026-04-13 20:13:26.665 [INFO][3877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5576905c19d ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.694223 containerd[1502]: 2026-04-13 20:13:26.674 [INFO][3877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.694223 containerd[1502]: 2026-04-13 20:13:26.674 [INFO][3877] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"484da9bd-407d-408c-b0d2-a512d2d9a654", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78", Pod:"csi-node-driver-hvr8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5576905c19d", MAC:"52:cc:bb:e5:bf:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:26.694223 containerd[1502]: 2026-04-13 20:13:26.685 [INFO][3877] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78" Namespace="calico-system" Pod="csi-node-driver-hvr8v" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:26.708882 containerd[1502]: time="2026-04-13T20:13:26.708641702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:26.708882 containerd[1502]: time="2026-04-13T20:13:26.708688472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:26.708882 containerd[1502]: time="2026-04-13T20:13:26.708699260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:26.709803 containerd[1502]: time="2026-04-13T20:13:26.709451457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:26.714843 kubelet[2564]: I0413 20:13:26.714819 2564 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-ca-bundle\") on node \"ci-4081-3-7-c-b0ece174b2\" DevicePath \"\"" Apr 13 20:13:26.714930 kubelet[2564]: I0413 20:13:26.714922 2564 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vk9cl\" (UniqueName: \"kubernetes.io/projected/cd1afe66-21c5-4bb5-bb7b-16dd69973766-kube-api-access-vk9cl\") on node \"ci-4081-3-7-c-b0ece174b2\" DevicePath \"\"" Apr 13 20:13:26.714998 kubelet[2564]: I0413 20:13:26.714991 2564 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cd1afe66-21c5-4bb5-bb7b-16dd69973766-nginx-config\") on node \"ci-4081-3-7-c-b0ece174b2\" DevicePath \"\"" Apr 13 20:13:26.715043 kubelet[2564]: I0413 20:13:26.715035 2564 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cd1afe66-21c5-4bb5-bb7b-16dd69973766-whisker-backend-key-pair\") on node \"ci-4081-3-7-c-b0ece174b2\" DevicePath \"\"" Apr 13 20:13:26.727904 systemd[1]: Started cri-containerd-6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78.scope - libcontainer container 6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78. Apr 13 20:13:26.749896 containerd[1502]: time="2026-04-13T20:13:26.749857587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hvr8v,Uid:484da9bd-407d-408c-b0d2-a512d2d9a654,Namespace:calico-system,Attempt:1,} returns sandbox id \"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78\"" Apr 13 20:13:26.751393 containerd[1502]: time="2026-04-13T20:13:26.751188864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:13:27.115797 systemd[1]: var-lib-kubelet-pods-cd1afe66\x2d21c5\x2d4bb5\x2dbb7b\x2d16dd69973766-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvk9cl.mount: Deactivated successfully. Apr 13 20:13:27.115983 systemd[1]: var-lib-kubelet-pods-cd1afe66\x2d21c5\x2d4bb5\x2dbb7b\x2d16dd69973766-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 20:13:27.458738 systemd[1]: Removed slice kubepods-besteffort-podcd1afe66_21c5_4bb5_bb7b_16dd69973766.slice - libcontainer container kubepods-besteffort-podcd1afe66_21c5_4bb5_bb7b_16dd69973766.slice. Apr 13 20:13:27.582063 systemd[1]: Created slice kubepods-besteffort-pod2dc681cf_c45b_4e28_8e42_615625471080.slice - libcontainer container kubepods-besteffort-pod2dc681cf_c45b_4e28_8e42_615625471080.slice. 
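
For the startup-latency entry logged earlier for calico-node-k7v7h, podStartE2EDuration (17.468092328s) is the gap between the pod's creation timestamp and the time it was observed running, while podStartSLOduration (3.527093512) appears to be that same gap minus the window spent pulling images (firstStartedPulling to lastFinishedPulling); the logged timestamps bear this out to within rounding. A small Go check of the arithmetic (an illustration of how the numbers relate, not kubelet's code):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	// Timestamps copied from the calico-node-k7v7h entry above.
    	created := parse("2026-04-13 20:13:09 +0000 UTC")
    	firstPull := parse("2026-04-13 20:13:10.155845263 +0000 UTC")
    	lastPull := parse("2026-04-13 20:13:24.096844069 +0000 UTC")
    	observed := parse("2026-04-13 20:13:26.468092328 +0000 UTC")

    	e2e := observed.Sub(created)         // ~17.468s, the podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // ~3.527s, the podStartSLOduration
    	fmt.Println("end-to-end:", e2e, "excluding image pulls:", slo)
    }
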
Apr 13 20:13:27.623443 kubelet[2564]: I0413 20:13:27.623373 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2dc681cf-c45b-4e28-8e42-615625471080-nginx-config\") pod \"whisker-7b9d6b4887-w4tvz\" (UID: \"2dc681cf-c45b-4e28-8e42-615625471080\") " pod="calico-system/whisker-7b9d6b4887-w4tvz" Apr 13 20:13:27.623443 kubelet[2564]: I0413 20:13:27.623441 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzbp\" (UniqueName: \"kubernetes.io/projected/2dc681cf-c45b-4e28-8e42-615625471080-kube-api-access-xvzbp\") pod \"whisker-7b9d6b4887-w4tvz\" (UID: \"2dc681cf-c45b-4e28-8e42-615625471080\") " pod="calico-system/whisker-7b9d6b4887-w4tvz" Apr 13 20:13:27.624001 kubelet[2564]: I0413 20:13:27.623458 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc681cf-c45b-4e28-8e42-615625471080-whisker-ca-bundle\") pod \"whisker-7b9d6b4887-w4tvz\" (UID: \"2dc681cf-c45b-4e28-8e42-615625471080\") " pod="calico-system/whisker-7b9d6b4887-w4tvz" Apr 13 20:13:27.624001 kubelet[2564]: I0413 20:13:27.623480 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2dc681cf-c45b-4e28-8e42-615625471080-whisker-backend-key-pair\") pod \"whisker-7b9d6b4887-w4tvz\" (UID: \"2dc681cf-c45b-4e28-8e42-615625471080\") " pod="calico-system/whisker-7b9d6b4887-w4tvz" Apr 13 20:13:27.886972 containerd[1502]: time="2026-04-13T20:13:27.886921395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9d6b4887-w4tvz,Uid:2dc681cf-c45b-4e28-8e42-615625471080,Namespace:calico-system,Attempt:0,}" Apr 13 20:13:28.030867 systemd-networkd[1414]: cali330faa800e5: Link UP Apr 13 20:13:28.031080 systemd-networkd[1414]: cali330faa800e5: Gained carrier Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.925 [ERROR][4068] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.937 [INFO][4068] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0 whisker-7b9d6b4887- calico-system 2dc681cf-c45b-4e28-8e42-615625471080 895 0 2026-04-13 20:13:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b9d6b4887 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 whisker-7b9d6b4887-w4tvz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali330faa800e5 [] [] }} ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.937 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" 
WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.980 [INFO][4082] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" HandleID="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.986 [INFO][4082] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" HandleID="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003801c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"whisker-7b9d6b4887-w4tvz", "timestamp":"2026-04-13 20:13:27.980015398 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000292c60)} Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.986 [INFO][4082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.986 [INFO][4082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.986 [INFO][4082] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.989 [INFO][4082] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:27.994 [INFO][4082] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.001 [INFO][4082] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.003 [INFO][4082] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.005 [INFO][4082] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.006 [INFO][4082] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.009 [INFO][4082] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.012 [INFO][4082] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" 
host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.018 [INFO][4082] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.194/26] block=192.168.19.192/26 handle="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.018 [INFO][4082] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.194/26] handle="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.018 [INFO][4082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:28.050014 containerd[1502]: 2026-04-13 20:13:28.018 [INFO][4082] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.194/26] IPv6=[] ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" HandleID="k8s-pod-network.c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.050507 containerd[1502]: 2026-04-13 20:13:28.023 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0", GenerateName:"whisker-7b9d6b4887-", Namespace:"calico-system", SelfLink:"", UID:"2dc681cf-c45b-4e28-8e42-615625471080", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b9d6b4887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"whisker-7b9d6b4887-w4tvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali330faa800e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:28.050507 containerd[1502]: 2026-04-13 20:13:28.023 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.194/32] ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.050507 containerd[1502]: 2026-04-13 20:13:28.023 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali330faa800e5 ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" 
Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.050507 containerd[1502]: 2026-04-13 20:13:28.030 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.050507 containerd[1502]: 2026-04-13 20:13:28.030 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0", GenerateName:"whisker-7b9d6b4887-", Namespace:"calico-system", SelfLink:"", UID:"2dc681cf-c45b-4e28-8e42-615625471080", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b9d6b4887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f", Pod:"whisker-7b9d6b4887-w4tvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali330faa800e5", MAC:"9a:fb:f1:07:90:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:28.050507 containerd[1502]: 2026-04-13 20:13:28.042 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f" Namespace="calico-system" Pod="whisker-7b9d6b4887-w4tvz" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--7b9d6b4887--w4tvz-eth0" Apr 13 20:13:28.076707 containerd[1502]: time="2026-04-13T20:13:28.076632460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:28.076847 containerd[1502]: time="2026-04-13T20:13:28.076714658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:28.077437 containerd[1502]: time="2026-04-13T20:13:28.077190291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:28.077437 containerd[1502]: time="2026-04-13T20:13:28.077268303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:28.097382 systemd[1]: Started cri-containerd-c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f.scope - libcontainer container c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f. Apr 13 20:13:28.137203 containerd[1502]: time="2026-04-13T20:13:28.137095708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9d6b4887-w4tvz,Uid:2dc681cf-c45b-4e28-8e42-615625471080,Namespace:calico-system,Attempt:0,} returns sandbox id \"c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f\"" Apr 13 20:13:28.263158 kubelet[2564]: I0413 20:13:28.263119 2564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1afe66-21c5-4bb5-bb7b-16dd69973766" path="/var/lib/kubelet/pods/cd1afe66-21c5-4bb5-bb7b-16dd69973766/volumes" Apr 13 20:13:28.569034 containerd[1502]: time="2026-04-13T20:13:28.568925044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:28.569667 containerd[1502]: time="2026-04-13T20:13:28.569484266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:13:28.571085 containerd[1502]: time="2026-04-13T20:13:28.570347838Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:28.572049 containerd[1502]: time="2026-04-13T20:13:28.572017404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:28.572501 containerd[1502]: time="2026-04-13T20:13:28.572469257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.82097885s" Apr 13 20:13:28.572533 containerd[1502]: time="2026-04-13T20:13:28.572502500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:13:28.573478 containerd[1502]: time="2026-04-13T20:13:28.573345904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:13:28.576242 containerd[1502]: time="2026-04-13T20:13:28.576205320Z" level=info msg="CreateContainer within sandbox \"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:13:28.598719 containerd[1502]: time="2026-04-13T20:13:28.598673960Z" level=info msg="CreateContainer within sandbox \"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cc4301d6f0b3c74335b30211a37b7038d37519b8b57666589902dc9605474ac8\"" Apr 13 20:13:28.600529 containerd[1502]: time="2026-04-13T20:13:28.600510056Z" level=info msg="StartContainer for \"cc4301d6f0b3c74335b30211a37b7038d37519b8b57666589902dc9605474ac8\"" Apr 13 20:13:28.630871 systemd[1]: Started cri-containerd-cc4301d6f0b3c74335b30211a37b7038d37519b8b57666589902dc9605474ac8.scope - libcontainer 
container cc4301d6f0b3c74335b30211a37b7038d37519b8b57666589902dc9605474ac8. Apr 13 20:13:28.657164 containerd[1502]: time="2026-04-13T20:13:28.657107490Z" level=info msg="StartContainer for \"cc4301d6f0b3c74335b30211a37b7038d37519b8b57666589902dc9605474ac8\" returns successfully" Apr 13 20:13:28.671940 systemd-networkd[1414]: cali5576905c19d: Gained IPv6LL Apr 13 20:13:29.696062 systemd-networkd[1414]: cali330faa800e5: Gained IPv6LL Apr 13 20:13:30.395672 containerd[1502]: time="2026-04-13T20:13:30.395628159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:30.396772 containerd[1502]: time="2026-04-13T20:13:30.396667121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:13:30.397775 containerd[1502]: time="2026-04-13T20:13:30.397571418Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:30.402037 containerd[1502]: time="2026-04-13T20:13:30.402018060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:30.402890 containerd[1502]: time="2026-04-13T20:13:30.402870465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.82906994s" Apr 13 20:13:30.402957 containerd[1502]: time="2026-04-13T20:13:30.402946281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:13:30.404412 containerd[1502]: time="2026-04-13T20:13:30.404389960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:13:30.406961 containerd[1502]: time="2026-04-13T20:13:30.406886395Z" level=info msg="CreateContainer within sandbox \"c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:13:30.423997 containerd[1502]: time="2026-04-13T20:13:30.423964520Z" level=info msg="CreateContainer within sandbox \"c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"67c5f64d129908ae316481820d6e8d2d8c07bb4448933c8aea50b0c00f709193\"" Apr 13 20:13:30.426141 containerd[1502]: time="2026-04-13T20:13:30.425329629Z" level=info msg="StartContainer for \"67c5f64d129908ae316481820d6e8d2d8c07bb4448933c8aea50b0c00f709193\"" Apr 13 20:13:30.450849 systemd[1]: Started cri-containerd-67c5f64d129908ae316481820d6e8d2d8c07bb4448933c8aea50b0c00f709193.scope - libcontainer container 67c5f64d129908ae316481820d6e8d2d8c07bb4448933c8aea50b0c00f709193. 
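
The "Pulled image ... in 1.82906994s" figure a few entries above for ghcr.io/flatcar/calico/whisker:v3.31.4 is containerd's own timing of the pull; subtracting the logged PullImage and Pulled timestamps gives roughly the same value. A small Go check, for illustration only:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the PullImage and Pulled entries for the whisker image.
    	started, err := time.Parse(time.RFC3339Nano, "2026-04-13T20:13:28.573345904Z")
    	if err != nil {
    		panic(err)
    	}
    	finished, err := time.Parse(time.RFC3339Nano, "2026-04-13T20:13:30.402870465Z")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("gap between the two entries:", finished.Sub(started)) // ~1.8295s vs the reported 1.82906994s
    }
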
Apr 13 20:13:30.485951 containerd[1502]: time="2026-04-13T20:13:30.485828493Z" level=info msg="StartContainer for \"67c5f64d129908ae316481820d6e8d2d8c07bb4448933c8aea50b0c00f709193\" returns successfully" Apr 13 20:13:32.341965 containerd[1502]: time="2026-04-13T20:13:32.341917468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:32.343059 containerd[1502]: time="2026-04-13T20:13:32.342940253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:13:32.344810 containerd[1502]: time="2026-04-13T20:13:32.343798393Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:32.345740 containerd[1502]: time="2026-04-13T20:13:32.345713235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:32.346262 containerd[1502]: time="2026-04-13T20:13:32.346233905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.941821694s" Apr 13 20:13:32.346336 containerd[1502]: time="2026-04-13T20:13:32.346324284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:13:32.347132 containerd[1502]: time="2026-04-13T20:13:32.347107977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:13:32.350220 containerd[1502]: time="2026-04-13T20:13:32.350188355Z" level=info msg="CreateContainer within sandbox \"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:13:32.365271 containerd[1502]: time="2026-04-13T20:13:32.365228114Z" level=info msg="CreateContainer within sandbox \"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a85ed7ff2afcb9cdca9c5b6b65ebc0c265b7b7a93d99ab97fd100ec08d3021ca\"" Apr 13 20:13:32.366798 containerd[1502]: time="2026-04-13T20:13:32.365740267Z" level=info msg="StartContainer for \"a85ed7ff2afcb9cdca9c5b6b65ebc0c265b7b7a93d99ab97fd100ec08d3021ca\"" Apr 13 20:13:32.396876 systemd[1]: Started cri-containerd-a85ed7ff2afcb9cdca9c5b6b65ebc0c265b7b7a93d99ab97fd100ec08d3021ca.scope - libcontainer container a85ed7ff2afcb9cdca9c5b6b65ebc0c265b7b7a93d99ab97fd100ec08d3021ca. 
Apr 13 20:13:32.440863 containerd[1502]: time="2026-04-13T20:13:32.440823074Z" level=info msg="StartContainer for \"a85ed7ff2afcb9cdca9c5b6b65ebc0c265b7b7a93d99ab97fd100ec08d3021ca\" returns successfully" Apr 13 20:13:32.497433 kubelet[2564]: I0413 20:13:32.497359 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hvr8v" podStartSLOduration=17.901350894 podStartE2EDuration="23.497344545s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="2026-04-13 20:13:26.751018412 +0000 UTC m=+34.570435108" lastFinishedPulling="2026-04-13 20:13:32.347012053 +0000 UTC m=+40.166428759" observedRunningTime="2026-04-13 20:13:32.497328293 +0000 UTC m=+40.316744999" watchObservedRunningTime="2026-04-13 20:13:32.497344545 +0000 UTC m=+40.316761251" Apr 13 20:13:33.337252 kubelet[2564]: I0413 20:13:33.337157 2564 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:13:33.337252 kubelet[2564]: I0413 20:13:33.337222 2564 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:13:34.289698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448907704.mount: Deactivated successfully. Apr 13 20:13:34.305717 containerd[1502]: time="2026-04-13T20:13:34.305669160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:34.309199 containerd[1502]: time="2026-04-13T20:13:34.309159565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:13:34.310358 containerd[1502]: time="2026-04-13T20:13:34.310335767Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:34.312082 containerd[1502]: time="2026-04-13T20:13:34.312066296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:34.312414 containerd[1502]: time="2026-04-13T20:13:34.312392678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.965256912s" Apr 13 20:13:34.312451 containerd[1502]: time="2026-04-13T20:13:34.312417463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:13:34.315310 containerd[1502]: time="2026-04-13T20:13:34.315287182Z" level=info msg="CreateContainer within sandbox \"c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:13:34.328798 containerd[1502]: time="2026-04-13T20:13:34.328762186Z" level=info msg="CreateContainer within sandbox 
\"c034cabca396e9aee904f3d4dc3284a6808ec6d94912acfdaea4cae27181220f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"61f3dd9839cfd079dd5968c12b70ed1d188b1081865ded349d550b64b6b659f7\"" Apr 13 20:13:34.329209 containerd[1502]: time="2026-04-13T20:13:34.329176380Z" level=info msg="StartContainer for \"61f3dd9839cfd079dd5968c12b70ed1d188b1081865ded349d550b64b6b659f7\"" Apr 13 20:13:34.363872 systemd[1]: Started cri-containerd-61f3dd9839cfd079dd5968c12b70ed1d188b1081865ded349d550b64b6b659f7.scope - libcontainer container 61f3dd9839cfd079dd5968c12b70ed1d188b1081865ded349d550b64b6b659f7. Apr 13 20:13:34.401069 containerd[1502]: time="2026-04-13T20:13:34.400955689Z" level=info msg="StartContainer for \"61f3dd9839cfd079dd5968c12b70ed1d188b1081865ded349d550b64b6b659f7\" returns successfully" Apr 13 20:13:34.515426 kubelet[2564]: I0413 20:13:34.515342 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b9d6b4887-w4tvz" podStartSLOduration=1.341939874 podStartE2EDuration="7.51532007s" podCreationTimestamp="2026-04-13 20:13:27 +0000 UTC" firstStartedPulling="2026-04-13 20:13:28.139683148 +0000 UTC m=+35.959099844" lastFinishedPulling="2026-04-13 20:13:34.313063334 +0000 UTC m=+42.132480040" observedRunningTime="2026-04-13 20:13:34.514055176 +0000 UTC m=+42.333471912" watchObservedRunningTime="2026-04-13 20:13:34.51532007 +0000 UTC m=+42.334736806" Apr 13 20:13:35.101271 systemd[1]: run-containerd-runc-k8s.io-61f3dd9839cfd079dd5968c12b70ed1d188b1081865ded349d550b64b6b659f7-runc.FwFFgu.mount: Deactivated successfully. Apr 13 20:13:36.266975 containerd[1502]: time="2026-04-13T20:13:36.265084512Z" level=info msg="StopPodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\"" Apr 13 20:13:36.266975 containerd[1502]: time="2026-04-13T20:13:36.266539118Z" level=info msg="StopPodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\"" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.344 [INFO][4505] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.344 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" iface="eth0" netns="/var/run/netns/cni-63652fac-d28f-ded6-8d50-084cf6c55a3d" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.345 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" iface="eth0" netns="/var/run/netns/cni-63652fac-d28f-ded6-8d50-084cf6c55a3d" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.347 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" iface="eth0" netns="/var/run/netns/cni-63652fac-d28f-ded6-8d50-084cf6c55a3d" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.347 [INFO][4505] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.347 [INFO][4505] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.368 [INFO][4522] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.368 [INFO][4522] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.369 [INFO][4522] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.373 [WARNING][4522] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.373 [INFO][4522] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.374 [INFO][4522] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:36.381633 containerd[1502]: 2026-04-13 20:13:36.376 [INFO][4505] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:36.381633 containerd[1502]: time="2026-04-13T20:13:36.379326486Z" level=info msg="TearDown network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" successfully" Apr 13 20:13:36.381633 containerd[1502]: time="2026-04-13T20:13:36.379348046Z" level=info msg="StopPodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" returns successfully" Apr 13 20:13:36.381984 containerd[1502]: time="2026-04-13T20:13:36.381811877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-42kqd,Uid:5925fe08-2006-453b-ae7e-b5697562e697,Namespace:calico-system,Attempt:1,}" Apr 13 20:13:36.382678 systemd[1]: run-netns-cni\x2d63652fac\x2dd28f\x2dded6\x2d8d50\x2d084cf6c55a3d.mount: Deactivated successfully. Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.348 [INFO][4509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.349 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" iface="eth0" netns="/var/run/netns/cni-fef989ea-a2e5-e55f-55e7-3626cca51f6e" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.349 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" iface="eth0" netns="/var/run/netns/cni-fef989ea-a2e5-e55f-55e7-3626cca51f6e" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.349 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" iface="eth0" netns="/var/run/netns/cni-fef989ea-a2e5-e55f-55e7-3626cca51f6e" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.349 [INFO][4509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.349 [INFO][4509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.368 [INFO][4527] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.368 [INFO][4527] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.374 [INFO][4527] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.378 [WARNING][4527] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.378 [INFO][4527] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.382 [INFO][4527] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:36.386919 containerd[1502]: 2026-04-13 20:13:36.384 [INFO][4509] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:36.387429 containerd[1502]: time="2026-04-13T20:13:36.387062415Z" level=info msg="TearDown network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" successfully" Apr 13 20:13:36.387429 containerd[1502]: time="2026-04-13T20:13:36.387078642Z" level=info msg="StopPodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" returns successfully" Apr 13 20:13:36.388019 containerd[1502]: time="2026-04-13T20:13:36.387688718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dr5n9,Uid:bbf82d0e-deb2-40ec-a573-b417e42188bc,Namespace:kube-system,Attempt:1,}" Apr 13 20:13:36.390222 systemd[1]: run-netns-cni\x2dfef989ea\x2da2e5\x2de55f\x2d55e7\x2d3626cca51f6e.mount: Deactivated successfully. Apr 13 20:13:36.493275 systemd-networkd[1414]: cali0aeda35f57f: Link UP Apr 13 20:13:36.493436 systemd-networkd[1414]: cali0aeda35f57f: Gained carrier Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.430 [ERROR][4538] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.439 [INFO][4538] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0 coredns-674b8bbfcf- kube-system bbf82d0e-deb2-40ec-a573-b417e42188bc 948 0 2026-04-13 20:12:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 coredns-674b8bbfcf-dr5n9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0aeda35f57f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.439 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.458 [INFO][4560] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" HandleID="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.464 [INFO][4560] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" HandleID="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"coredns-674b8bbfcf-dr5n9", 
"timestamp":"2026-04-13 20:13:36.458718973 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.464 [INFO][4560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.464 [INFO][4560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.464 [INFO][4560] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.467 [INFO][4560] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.470 [INFO][4560] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.474 [INFO][4560] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.475 [INFO][4560] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.477 [INFO][4560] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.477 [INFO][4560] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.478 [INFO][4560] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79 Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.482 [INFO][4560] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.488 [INFO][4560] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.195/26] block=192.168.19.192/26 handle="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.488 [INFO][4560] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.195/26] handle="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.488 [INFO][4560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:13:36.510593 containerd[1502]: 2026-04-13 20:13:36.488 [INFO][4560] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.195/26] IPv6=[] ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" HandleID="k8s-pod-network.6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.511103 containerd[1502]: 2026-04-13 20:13:36.490 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbf82d0e-deb2-40ec-a573-b417e42188bc", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"coredns-674b8bbfcf-dr5n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aeda35f57f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:36.511103 containerd[1502]: 2026-04-13 20:13:36.490 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.195/32] ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.511103 containerd[1502]: 2026-04-13 20:13:36.491 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0aeda35f57f ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.511103 containerd[1502]: 2026-04-13 20:13:36.492 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.511103 containerd[1502]: 2026-04-13 20:13:36.494 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbf82d0e-deb2-40ec-a573-b417e42188bc", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79", Pod:"coredns-674b8bbfcf-dr5n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aeda35f57f", MAC:"be:bc:fd:39:f5:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:36.511103 containerd[1502]: 2026-04-13 20:13:36.506 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79" Namespace="kube-system" Pod="coredns-674b8bbfcf-dr5n9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:36.526355 containerd[1502]: time="2026-04-13T20:13:36.526071800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:36.526355 containerd[1502]: time="2026-04-13T20:13:36.526113169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:36.526355 containerd[1502]: time="2026-04-13T20:13:36.526123313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:36.526355 containerd[1502]: time="2026-04-13T20:13:36.526186752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:36.545888 systemd[1]: Started cri-containerd-6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79.scope - libcontainer container 6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79. Apr 13 20:13:36.580164 containerd[1502]: time="2026-04-13T20:13:36.580123828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dr5n9,Uid:bbf82d0e-deb2-40ec-a573-b417e42188bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79\"" Apr 13 20:13:36.590693 containerd[1502]: time="2026-04-13T20:13:36.590601944Z" level=info msg="CreateContainer within sandbox \"6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:13:36.605353 systemd-networkd[1414]: cali03063751a48: Link UP Apr 13 20:13:36.606610 systemd-networkd[1414]: cali03063751a48: Gained carrier Apr 13 20:13:36.613800 containerd[1502]: time="2026-04-13T20:13:36.613312757Z" level=info msg="CreateContainer within sandbox \"6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6a487cad573b264de9f6cf46c5acbbadd39b6fb3cf801762cca49303b5323f8\"" Apr 13 20:13:36.616465 containerd[1502]: time="2026-04-13T20:13:36.616381911Z" level=info msg="StartContainer for \"b6a487cad573b264de9f6cf46c5acbbadd39b6fb3cf801762cca49303b5323f8\"" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.430 [ERROR][4536] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.438 [INFO][4536] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0 calico-apiserver-75d956895- calico-system 5925fe08-2006-453b-ae7e-b5697562e697 947 0 2026-04-13 20:13:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d956895 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 calico-apiserver-75d956895-42kqd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali03063751a48 [] [] }} ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.438 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.459 [INFO][4562] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" HandleID="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" 
Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.464 [INFO][4562] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" HandleID="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277f60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"calico-apiserver-75d956895-42kqd", "timestamp":"2026-04-13 20:13:36.459741324 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001de580)} Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.464 [INFO][4562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.488 [INFO][4562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.488 [INFO][4562] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.568 [INFO][4562] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.575 [INFO][4562] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.580 [INFO][4562] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.582 [INFO][4562] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.586 [INFO][4562] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.586 [INFO][4562] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.588 [INFO][4562] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0 Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.592 [INFO][4562] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.597 [INFO][4562] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.196/26] block=192.168.19.192/26 handle="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.597 [INFO][4562] 
ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.196/26] handle="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.597 [INFO][4562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:36.622216 containerd[1502]: 2026-04-13 20:13:36.597 [INFO][4562] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.196/26] IPv6=[] ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" HandleID="k8s-pod-network.51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.622679 containerd[1502]: 2026-04-13 20:13:36.600 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"5925fe08-2006-453b-ae7e-b5697562e697", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"calico-apiserver-75d956895-42kqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03063751a48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:36.622679 containerd[1502]: 2026-04-13 20:13:36.601 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.196/32] ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.622679 containerd[1502]: 2026-04-13 20:13:36.601 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03063751a48 ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.622679 containerd[1502]: 2026-04-13 20:13:36.604 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.622679 containerd[1502]: 2026-04-13 20:13:36.604 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"5925fe08-2006-453b-ae7e-b5697562e697", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0", Pod:"calico-apiserver-75d956895-42kqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03063751a48", MAC:"7a:90:07:89:5d:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:36.622679 containerd[1502]: 2026-04-13 20:13:36.617 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0" Namespace="calico-system" Pod="calico-apiserver-75d956895-42kqd" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:36.645487 containerd[1502]: time="2026-04-13T20:13:36.644951036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:36.645487 containerd[1502]: time="2026-04-13T20:13:36.645062736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:36.645487 containerd[1502]: time="2026-04-13T20:13:36.645102154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:36.645960 containerd[1502]: time="2026-04-13T20:13:36.645923805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:36.649897 systemd[1]: Started cri-containerd-b6a487cad573b264de9f6cf46c5acbbadd39b6fb3cf801762cca49303b5323f8.scope - libcontainer container b6a487cad573b264de9f6cf46c5acbbadd39b6fb3cf801762cca49303b5323f8. Apr 13 20:13:36.665917 systemd[1]: Started cri-containerd-51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0.scope - libcontainer container 51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0. Apr 13 20:13:36.693585 containerd[1502]: time="2026-04-13T20:13:36.692139638Z" level=info msg="StartContainer for \"b6a487cad573b264de9f6cf46c5acbbadd39b6fb3cf801762cca49303b5323f8\" returns successfully" Apr 13 20:13:36.733797 containerd[1502]: time="2026-04-13T20:13:36.733723734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-42kqd,Uid:5925fe08-2006-453b-ae7e-b5697562e697,Namespace:calico-system,Attempt:1,} returns sandbox id \"51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0\"" Apr 13 20:13:36.740784 containerd[1502]: time="2026-04-13T20:13:36.740499070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:13:37.543491 kubelet[2564]: I0413 20:13:37.543087 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dr5n9" podStartSLOduration=38.543065414 podStartE2EDuration="38.543065414s" podCreationTimestamp="2026-04-13 20:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:13:37.542820347 +0000 UTC m=+45.362237053" watchObservedRunningTime="2026-04-13 20:13:37.543065414 +0000 UTC m=+45.362482150" Apr 13 20:13:38.265300 containerd[1502]: time="2026-04-13T20:13:38.264116569Z" level=info msg="StopPodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\"" Apr 13 20:13:38.274960 systemd-networkd[1414]: cali0aeda35f57f: Gained IPv6LL Apr 13 20:13:38.336352 systemd-networkd[1414]: cali03063751a48: Gained IPv6LL Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.349 [INFO][4753] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.350 [INFO][4753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" iface="eth0" netns="/var/run/netns/cni-7922b0db-cf3a-f594-4ff2-4b141f73d93c" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.350 [INFO][4753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" iface="eth0" netns="/var/run/netns/cni-7922b0db-cf3a-f594-4ff2-4b141f73d93c" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.351 [INFO][4753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" iface="eth0" netns="/var/run/netns/cni-7922b0db-cf3a-f594-4ff2-4b141f73d93c" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.351 [INFO][4753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.351 [INFO][4753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.371 [INFO][4760] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.372 [INFO][4760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.372 [INFO][4760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.379 [WARNING][4760] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.379 [INFO][4760] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.381 [INFO][4760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:38.390787 containerd[1502]: 2026-04-13 20:13:38.383 [INFO][4753] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:38.392123 containerd[1502]: time="2026-04-13T20:13:38.391452140Z" level=info msg="TearDown network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" successfully" Apr 13 20:13:38.392123 containerd[1502]: time="2026-04-13T20:13:38.391525028Z" level=info msg="StopPodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" returns successfully" Apr 13 20:13:38.393473 systemd[1]: run-netns-cni\x2d7922b0db\x2dcf3a\x2df594\x2d4ff2\x2d4b141f73d93c.mount: Deactivated successfully. 
Apr 13 20:13:38.395581 containerd[1502]: time="2026-04-13T20:13:38.395032751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c74bf58b8-pnfc5,Uid:a4e90b81-acc5-4fe5-b623-c177b554394d,Namespace:calico-system,Attempt:1,}" Apr 13 20:13:38.524003 systemd-networkd[1414]: calidc50e08f996: Link UP Apr 13 20:13:38.525190 systemd-networkd[1414]: calidc50e08f996: Gained carrier Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.439 [ERROR][4767] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.452 [INFO][4767] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0 calico-kube-controllers-6c74bf58b8- calico-system a4e90b81-acc5-4fe5-b623-c177b554394d 966 0 2026-04-13 20:13:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c74bf58b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 calico-kube-controllers-6c74bf58b8-pnfc5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc50e08f996 [] [] }} ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.452 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.477 [INFO][4779] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" HandleID="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.483 [INFO][4779] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" HandleID="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"calico-kube-controllers-6c74bf58b8-pnfc5", "timestamp":"2026-04-13 20:13:38.477849605 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001126e0)} Apr 13 20:13:38.546994 containerd[1502]: 
2026-04-13 20:13:38.483 [INFO][4779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.484 [INFO][4779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.484 [INFO][4779] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.486 [INFO][4779] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.489 [INFO][4779] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.493 [INFO][4779] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.494 [INFO][4779] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.496 [INFO][4779] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.496 [INFO][4779] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.498 [INFO][4779] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552 Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.501 [INFO][4779] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.506 [INFO][4779] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.197/26] block=192.168.19.192/26 handle="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.506 [INFO][4779] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.197/26] handle="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.506 [INFO][4779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:13:38.546994 containerd[1502]: 2026-04-13 20:13:38.506 [INFO][4779] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.197/26] IPv6=[] ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" HandleID="k8s-pod-network.19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.547477 containerd[1502]: 2026-04-13 20:13:38.510 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0", GenerateName:"calico-kube-controllers-6c74bf58b8-", Namespace:"calico-system", SelfLink:"", UID:"a4e90b81-acc5-4fe5-b623-c177b554394d", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c74bf58b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"calico-kube-controllers-6c74bf58b8-pnfc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc50e08f996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:38.547477 containerd[1502]: 2026-04-13 20:13:38.510 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.197/32] ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.547477 containerd[1502]: 2026-04-13 20:13:38.510 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc50e08f996 ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.547477 containerd[1502]: 2026-04-13 20:13:38.525 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" 
WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.547477 containerd[1502]: 2026-04-13 20:13:38.527 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0", GenerateName:"calico-kube-controllers-6c74bf58b8-", Namespace:"calico-system", SelfLink:"", UID:"a4e90b81-acc5-4fe5-b623-c177b554394d", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c74bf58b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552", Pod:"calico-kube-controllers-6c74bf58b8-pnfc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc50e08f996", MAC:"1a:14:23:c3:5f:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:38.547477 containerd[1502]: 2026-04-13 20:13:38.542 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552" Namespace="calico-system" Pod="calico-kube-controllers-6c74bf58b8-pnfc5" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:38.600157 containerd[1502]: time="2026-04-13T20:13:38.599429158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:38.600157 containerd[1502]: time="2026-04-13T20:13:38.599480327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:38.600157 containerd[1502]: time="2026-04-13T20:13:38.599494653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:38.600157 containerd[1502]: time="2026-04-13T20:13:38.599592430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:38.631383 systemd[1]: Started cri-containerd-19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552.scope - libcontainer container 19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552. Apr 13 20:13:38.667352 containerd[1502]: time="2026-04-13T20:13:38.667289480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c74bf58b8-pnfc5,Uid:a4e90b81-acc5-4fe5-b623-c177b554394d,Namespace:calico-system,Attempt:1,} returns sandbox id \"19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552\"" Apr 13 20:13:38.896506 kubelet[2564]: I0413 20:13:38.896190 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:13:39.168563 containerd[1502]: time="2026-04-13T20:13:39.168430053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:39.169469 containerd[1502]: time="2026-04-13T20:13:39.169435273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:13:39.170412 containerd[1502]: time="2026-04-13T20:13:39.170232259Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:39.172075 containerd[1502]: time="2026-04-13T20:13:39.172054924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:39.172601 containerd[1502]: time="2026-04-13T20:13:39.172576831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.431929014s" Apr 13 20:13:39.172632 containerd[1502]: time="2026-04-13T20:13:39.172602910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:13:39.173732 containerd[1502]: time="2026-04-13T20:13:39.173680197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:13:39.176296 containerd[1502]: time="2026-04-13T20:13:39.176273067Z" level=info msg="CreateContainer within sandbox \"51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:13:39.198164 containerd[1502]: time="2026-04-13T20:13:39.198070149Z" level=info msg="CreateContainer within sandbox \"51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"958ff6fc14d5a48c2feae1b5b787f11a4e5272b7f8c2bf35a70267e144fb43e7\"" Apr 13 20:13:39.198584 containerd[1502]: time="2026-04-13T20:13:39.198548622Z" level=info msg="StartContainer for \"958ff6fc14d5a48c2feae1b5b787f11a4e5272b7f8c2bf35a70267e144fb43e7\"" Apr 13 20:13:39.220871 systemd[1]: Started cri-containerd-958ff6fc14d5a48c2feae1b5b787f11a4e5272b7f8c2bf35a70267e144fb43e7.scope - libcontainer container 
958ff6fc14d5a48c2feae1b5b787f11a4e5272b7f8c2bf35a70267e144fb43e7. Apr 13 20:13:39.262323 containerd[1502]: time="2026-04-13T20:13:39.262199355Z" level=info msg="StartContainer for \"958ff6fc14d5a48c2feae1b5b787f11a4e5272b7f8c2bf35a70267e144fb43e7\" returns successfully" Apr 13 20:13:39.551271 kubelet[2564]: I0413 20:13:39.551090 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-75d956895-42kqd" podStartSLOduration=28.11783681 podStartE2EDuration="30.55105505s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="2026-04-13 20:13:36.73996771 +0000 UTC m=+44.559384416" lastFinishedPulling="2026-04-13 20:13:39.17318595 +0000 UTC m=+46.992602656" observedRunningTime="2026-04-13 20:13:39.550201563 +0000 UTC m=+47.369618299" watchObservedRunningTime="2026-04-13 20:13:39.55105505 +0000 UTC m=+47.370471786" Apr 13 20:13:40.263449 containerd[1502]: time="2026-04-13T20:13:40.263308032Z" level=info msg="StopPodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\"" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.328 [INFO][4956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.328 [INFO][4956] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" iface="eth0" netns="/var/run/netns/cni-c87155cd-176e-c1d7-20ab-dd1521b282b0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.328 [INFO][4956] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" iface="eth0" netns="/var/run/netns/cni-c87155cd-176e-c1d7-20ab-dd1521b282b0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.328 [INFO][4956] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" iface="eth0" netns="/var/run/netns/cni-c87155cd-176e-c1d7-20ab-dd1521b282b0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.328 [INFO][4956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.328 [INFO][4956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.352 [INFO][4964] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.352 [INFO][4964] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.352 [INFO][4964] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.357 [WARNING][4964] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.357 [INFO][4964] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.358 [INFO][4964] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:40.364700 containerd[1502]: 2026-04-13 20:13:40.362 [INFO][4956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:40.365079 containerd[1502]: time="2026-04-13T20:13:40.364960468Z" level=info msg="TearDown network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" successfully" Apr 13 20:13:40.365079 containerd[1502]: time="2026-04-13T20:13:40.365002302Z" level=info msg="StopPodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" returns successfully" Apr 13 20:13:40.365867 containerd[1502]: time="2026-04-13T20:13:40.365842937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-66fp9,Uid:1c1bb0e0-ee02-473c-a263-cdfa973e52e6,Namespace:calico-system,Attempt:1,}" Apr 13 20:13:40.369392 systemd[1]: run-netns-cni\x2dc87155cd\x2d176e\x2dc1d7\x2d20ab\x2ddd1521b282b0.mount: Deactivated successfully. 
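
The kubelet pod_startup_latency_tracker entry above (20:13:39.551) reports podStartE2EDuration=30.55105505s and podStartSLOduration=28.11783681s for calico-apiserver-75d956895-42kqd. The numbers are consistent with the SLO duration being the end-to-end startup time minus the image-pull window (firstStartedPulling to lastFinishedPulling). The following minimal Go sketch uses only timestamps copied from that log line to reproduce both figures; the layout string and variable names are illustrative, not taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-04-13 20:13:09 +0000 UTC")              // podCreationTimestamp
	firstPull := parse("2026-04-13 20:13:36.73996771 +0000 UTC")   // firstStartedPulling
	lastPull := parse("2026-04-13 20:13:39.17318595 +0000 UTC")    // lastFinishedPulling
	running := parse("2026-04-13 20:13:39.55105505 +0000 UTC")     // watchObservedRunningTime

	e2e := running.Sub(created) // end-to-end startup duration
	pull := lastPull.Sub(firstPull)
	slo := e2e - pull // startup duration excluding the image-pull window

	fmt.Println(e2e, pull, slo) // 30.55105505s 2.43321824s 28.11783681s
}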
Apr 13 20:13:40.482151 systemd-networkd[1414]: cali56495e50342: Link UP Apr 13 20:13:40.482357 systemd-networkd[1414]: cali56495e50342: Gained carrier Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.406 [ERROR][4975] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.413 [INFO][4975] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0 calico-apiserver-75d956895- calico-system 1c1bb0e0-ee02-473c-a263-cdfa973e52e6 1004 0 2026-04-13 20:13:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d956895 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 calico-apiserver-75d956895-66fp9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali56495e50342 [] [] }} ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.413 [INFO][4975] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.437 [INFO][4986] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" HandleID="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.442 [INFO][4986] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" HandleID="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364550), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"calico-apiserver-75d956895-66fp9", "timestamp":"2026-04-13 20:13:40.436995458 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a11e0)} Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.442 [INFO][4986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.442 [INFO][4986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.442 [INFO][4986] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.445 [INFO][4986] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.449 [INFO][4986] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.455 [INFO][4986] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.457 [INFO][4986] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.458 [INFO][4986] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.459 [INFO][4986] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.460 [INFO][4986] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.465 [INFO][4986] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.471 [INFO][4986] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.198/26] block=192.168.19.192/26 handle="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.471 [INFO][4986] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.198/26] handle="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.471 [INFO][4986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:13:40.498717 containerd[1502]: 2026-04-13 20:13:40.471 [INFO][4986] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.198/26] IPv6=[] ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" HandleID="k8s-pod-network.cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.499189 containerd[1502]: 2026-04-13 20:13:40.474 [INFO][4975] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"1c1bb0e0-ee02-473c-a263-cdfa973e52e6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"calico-apiserver-75d956895-66fp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56495e50342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:40.499189 containerd[1502]: 2026-04-13 20:13:40.475 [INFO][4975] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.198/32] ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.499189 containerd[1502]: 2026-04-13 20:13:40.475 [INFO][4975] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56495e50342 ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.499189 containerd[1502]: 2026-04-13 20:13:40.477 [INFO][4975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.499189 containerd[1502]: 2026-04-13 20:13:40.480 [INFO][4975] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"1c1bb0e0-ee02-473c-a263-cdfa973e52e6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda", Pod:"calico-apiserver-75d956895-66fp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56495e50342", MAC:"32:c5:0b:15:44:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:40.499189 containerd[1502]: 2026-04-13 20:13:40.493 [INFO][4975] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda" Namespace="calico-system" Pod="calico-apiserver-75d956895-66fp9" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:40.523306 containerd[1502]: time="2026-04-13T20:13:40.522258074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:40.524077 containerd[1502]: time="2026-04-13T20:13:40.523527975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:40.524394 containerd[1502]: time="2026-04-13T20:13:40.524371431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:40.525079 containerd[1502]: time="2026-04-13T20:13:40.524712576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:40.561199 systemd[1]: run-containerd-runc-k8s.io-cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda-runc.sqyXIe.mount: Deactivated successfully. Apr 13 20:13:40.572655 systemd[1]: Started cri-containerd-cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda.scope - libcontainer container cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda. 
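
Both pods above are assigned from the same host-affine IPAM block, 192.168.19.192/26 (.197 for calico-kube-controllers-6c74bf58b8-pnfc5, .198 for calico-apiserver-75d956895-66fp9), so this node draws from a 64-address window spanning 192.168.19.192 through 192.168.19.255. A short standalone Go check with the standard net/netip package (not Calico code) confirms the containment that the ipam.go messages report:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The host-affine block reported by ipam.go for node ci-4081-3-7-c-b0ece174b2.
	block := netip.MustParsePrefix("192.168.19.192/26")

	// Addresses assigned in this part of the log excerpt.
	assigned := []string{"192.168.19.197", "192.168.19.198"}
	for _, s := range assigned {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}

	// A /26 spans 2^(32-26) = 64 addresses: .192 through .255.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}

Block affinity is also why the later assignments in this excerpt (.199 for coredns and .200 for goldmane) come out of the same /26.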
Apr 13 20:13:40.575944 systemd-networkd[1414]: calidc50e08f996: Gained IPv6LL Apr 13 20:13:40.622774 kernel: calico-node[4991]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:13:40.646157 containerd[1502]: time="2026-04-13T20:13:40.646123312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d956895-66fp9,Uid:1c1bb0e0-ee02-473c-a263-cdfa973e52e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda\"" Apr 13 20:13:40.650964 containerd[1502]: time="2026-04-13T20:13:40.650934154Z" level=info msg="CreateContainer within sandbox \"cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:13:40.664513 containerd[1502]: time="2026-04-13T20:13:40.662658590Z" level=info msg="CreateContainer within sandbox \"cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c0af764b73d41ee3ede600f9c09aac81b278817d92c06d0ab34392e76ef97f9\"" Apr 13 20:13:40.664513 containerd[1502]: time="2026-04-13T20:13:40.663346624Z" level=info msg="StartContainer for \"2c0af764b73d41ee3ede600f9c09aac81b278817d92c06d0ab34392e76ef97f9\"" Apr 13 20:13:40.708215 systemd[1]: Started cri-containerd-2c0af764b73d41ee3ede600f9c09aac81b278817d92c06d0ab34392e76ef97f9.scope - libcontainer container 2c0af764b73d41ee3ede600f9c09aac81b278817d92c06d0ab34392e76ef97f9. Apr 13 20:13:40.762261 containerd[1502]: time="2026-04-13T20:13:40.762212884Z" level=info msg="StartContainer for \"2c0af764b73d41ee3ede600f9c09aac81b278817d92c06d0ab34392e76ef97f9\" returns successfully" Apr 13 20:13:41.267556 containerd[1502]: time="2026-04-13T20:13:41.267519089Z" level=info msg="StopPodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\"" Apr 13 20:13:41.268670 containerd[1502]: time="2026-04-13T20:13:41.267885366Z" level=info msg="StopPodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\"" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.360 [INFO][5124] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.361 [INFO][5124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" iface="eth0" netns="/var/run/netns/cni-d9819e86-0bc6-fafc-895b-64fbfee9afd2" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.361 [INFO][5124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" iface="eth0" netns="/var/run/netns/cni-d9819e86-0bc6-fafc-895b-64fbfee9afd2" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.362 [INFO][5124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" iface="eth0" netns="/var/run/netns/cni-d9819e86-0bc6-fafc-895b-64fbfee9afd2" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.362 [INFO][5124] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.362 [INFO][5124] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.409 [INFO][5149] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.409 [INFO][5149] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.409 [INFO][5149] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.419 [WARNING][5149] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.420 [INFO][5149] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.422 [INFO][5149] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:41.432193 containerd[1502]: 2026-04-13 20:13:41.424 [INFO][5124] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:41.432193 containerd[1502]: time="2026-04-13T20:13:41.432027946Z" level=info msg="TearDown network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" successfully" Apr 13 20:13:41.432193 containerd[1502]: time="2026-04-13T20:13:41.432049523Z" level=info msg="StopPodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" returns successfully" Apr 13 20:13:41.434240 containerd[1502]: time="2026-04-13T20:13:41.434222349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6ngr,Uid:487ea9ef-6892-48d7-a2fa-05a0f1fc06fa,Namespace:calico-system,Attempt:1,}" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.357 [INFO][5123] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.357 [INFO][5123] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" iface="eth0" netns="/var/run/netns/cni-b9f102d0-8cef-8064-c897-49ed1a49f865" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.358 [INFO][5123] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" iface="eth0" netns="/var/run/netns/cni-b9f102d0-8cef-8064-c897-49ed1a49f865" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.359 [INFO][5123] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" iface="eth0" netns="/var/run/netns/cni-b9f102d0-8cef-8064-c897-49ed1a49f865" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.359 [INFO][5123] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.359 [INFO][5123] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.412 [INFO][5147] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.412 [INFO][5147] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.422 [INFO][5147] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.427 [WARNING][5147] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.427 [INFO][5147] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.429 [INFO][5147] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:41.440071 containerd[1502]: 2026-04-13 20:13:41.431 [INFO][5123] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:41.440549 containerd[1502]: time="2026-04-13T20:13:41.440532463Z" level=info msg="TearDown network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" successfully" Apr 13 20:13:41.440595 containerd[1502]: time="2026-04-13T20:13:41.440586690Z" level=info msg="StopPodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" returns successfully" Apr 13 20:13:41.441221 containerd[1502]: time="2026-04-13T20:13:41.441206599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvx68,Uid:650a7fe1-f630-4ffa-8ebb-7c8ab54e8781,Namespace:kube-system,Attempt:1,}" Apr 13 20:13:41.483906 systemd-networkd[1414]: vxlan.calico: Link UP Apr 13 20:13:41.483913 systemd-networkd[1414]: vxlan.calico: Gained carrier Apr 13 20:13:41.544402 systemd[1]: run-netns-cni\x2dd9819e86\x2d0bc6\x2dfafc\x2d895b\x2d64fbfee9afd2.mount: Deactivated successfully. Apr 13 20:13:41.544481 systemd[1]: run-netns-cni\x2db9f102d0\x2d8cef\x2d8064\x2dc897\x2d49ed1a49f865.mount: Deactivated successfully. Apr 13 20:13:41.567634 kubelet[2564]: I0413 20:13:41.567581 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-75d956895-66fp9" podStartSLOduration=32.56755086 podStartE2EDuration="32.56755086s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:13:41.56730193 +0000 UTC m=+49.386718626" watchObservedRunningTime="2026-04-13 20:13:41.56755086 +0000 UTC m=+49.386967566" Apr 13 20:13:41.738087 systemd-networkd[1414]: cali1f2f9b31ec4: Link UP Apr 13 20:13:41.741697 systemd-networkd[1414]: cali1f2f9b31ec4: Gained carrier Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.606 [INFO][5183] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0 coredns-674b8bbfcf- kube-system 650a7fe1-f630-4ffa-8ebb-7c8ab54e8781 1024 0 2026-04-13 20:12:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 coredns-674b8bbfcf-fvx68 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f2f9b31ec4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.606 [INFO][5183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.641 [INFO][5210] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" HandleID="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" 
Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.658 [INFO][5210] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" HandleID="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"coredns-674b8bbfcf-fvx68", "timestamp":"2026-04-13 20:13:41.641869479 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c9080)} Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.658 [INFO][5210] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.658 [INFO][5210] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.658 [INFO][5210] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.669 [INFO][5210] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.684 [INFO][5210] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.691 [INFO][5210] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.695 [INFO][5210] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.697 [INFO][5210] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.698 [INFO][5210] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.699 [INFO][5210] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19 Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.703 [INFO][5210] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.713 [INFO][5210] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.199/26] block=192.168.19.192/26 handle="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.713 [INFO][5210] ipam/ipam.go 895: Auto-assigned 1 
out of 1 IPv4s: [192.168.19.199/26] handle="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.713 [INFO][5210] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:41.762657 containerd[1502]: 2026-04-13 20:13:41.713 [INFO][5210] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.199/26] IPv6=[] ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" HandleID="k8s-pod-network.10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.764104 containerd[1502]: 2026-04-13 20:13:41.724 [INFO][5183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"coredns-674b8bbfcf-fvx68", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f2f9b31ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:41.764104 containerd[1502]: 2026-04-13 20:13:41.724 [INFO][5183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.199/32] ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.764104 containerd[1502]: 2026-04-13 20:13:41.725 [INFO][5183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f2f9b31ec4 ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" 
WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.764104 containerd[1502]: 2026-04-13 20:13:41.742 [INFO][5183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.764104 containerd[1502]: 2026-04-13 20:13:41.745 [INFO][5183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19", Pod:"coredns-674b8bbfcf-fvx68", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f2f9b31ec4", MAC:"82:b9:ee:5c:e4:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:41.764104 containerd[1502]: 2026-04-13 20:13:41.759 [INFO][5183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19" Namespace="kube-system" Pod="coredns-674b8bbfcf-fvx68" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:41.819015 containerd[1502]: time="2026-04-13T20:13:41.816393679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:41.819015 containerd[1502]: time="2026-04-13T20:13:41.816457809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:41.819015 containerd[1502]: time="2026-04-13T20:13:41.816468172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:41.819173 containerd[1502]: time="2026-04-13T20:13:41.818891850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:41.822192 systemd-networkd[1414]: calia3b0227342b: Link UP Apr 13 20:13:41.822925 systemd-networkd[1414]: calia3b0227342b: Gained carrier Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.605 [INFO][5168] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0 goldmane-5b85766d88- calico-system 487ea9ef-6892-48d7-a2fa-05a0f1fc06fa 1025 0 2026-04-13 20:13:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-c-b0ece174b2 goldmane-5b85766d88-c6ngr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia3b0227342b [] [] }} ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.605 [INFO][5168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.680 [INFO][5212] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" HandleID="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.690 [INFO][5212] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" HandleID="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a2010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-c-b0ece174b2", "pod":"goldmane-5b85766d88-c6ngr", "timestamp":"2026-04-13 20:13:41.680307454 +0000 UTC"}, Hostname:"ci-4081-3-7-c-b0ece174b2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000284580)} Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.690 [INFO][5212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.713 [INFO][5212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.713 [INFO][5212] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-c-b0ece174b2' Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.763 [INFO][5212] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.785 [INFO][5212] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.793 [INFO][5212] ipam/ipam.go 526: Trying affinity for 192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.796 [INFO][5212] ipam/ipam.go 160: Attempting to load block cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.800 [INFO][5212] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.800 [INFO][5212] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.802 [INFO][5212] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2 Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.809 [INFO][5212] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.815 [INFO][5212] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.19.200/26] block=192.168.19.192/26 handle="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.815 [INFO][5212] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.19.200/26] handle="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" host="ci-4081-3-7-c-b0ece174b2" Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.815 [INFO][5212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:13:41.853444 containerd[1502]: 2026-04-13 20:13:41.815 [INFO][5212] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.19.200/26] IPv6=[] ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" HandleID="k8s-pod-network.a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.854246 containerd[1502]: 2026-04-13 20:13:41.819 [INFO][5168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"", Pod:"goldmane-5b85766d88-c6ngr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3b0227342b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:41.854246 containerd[1502]: 2026-04-13 20:13:41.819 [INFO][5168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.200/32] ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.854246 containerd[1502]: 2026-04-13 20:13:41.819 [INFO][5168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3b0227342b ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.854246 containerd[1502]: 2026-04-13 20:13:41.823 [INFO][5168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.854246 containerd[1502]: 2026-04-13 20:13:41.825 [INFO][5168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" 
Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2", Pod:"goldmane-5b85766d88-c6ngr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3b0227342b", MAC:"36:6e:17:26:ec:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:41.854246 containerd[1502]: 2026-04-13 20:13:41.834 [INFO][5168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2" Namespace="calico-system" Pod="goldmane-5b85766d88-c6ngr" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:41.884277 systemd[1]: Started cri-containerd-10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19.scope - libcontainer container 10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19. Apr 13 20:13:41.910617 containerd[1502]: time="2026-04-13T20:13:41.910403962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:13:41.910617 containerd[1502]: time="2026-04-13T20:13:41.910444275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:13:41.910617 containerd[1502]: time="2026-04-13T20:13:41.910462890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:41.910617 containerd[1502]: time="2026-04-13T20:13:41.910540885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:13:41.951164 systemd[1]: Started cri-containerd-a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2.scope - libcontainer container a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2. 
Apr 13 20:13:41.983334 containerd[1502]: time="2026-04-13T20:13:41.983267325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fvx68,Uid:650a7fe1-f630-4ffa-8ebb-7c8ab54e8781,Namespace:kube-system,Attempt:1,} returns sandbox id \"10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19\"" Apr 13 20:13:41.983951 systemd-networkd[1414]: cali56495e50342: Gained IPv6LL Apr 13 20:13:41.990527 containerd[1502]: time="2026-04-13T20:13:41.990498833Z" level=info msg="CreateContainer within sandbox \"10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:13:42.014246 containerd[1502]: time="2026-04-13T20:13:42.013987284Z" level=info msg="CreateContainer within sandbox \"10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbf57d09d14043b4ef3fbff9179b2f0ebbe954970e29a356e471e32560e1708b\"" Apr 13 20:13:42.015162 containerd[1502]: time="2026-04-13T20:13:42.015140464Z" level=info msg="StartContainer for \"cbf57d09d14043b4ef3fbff9179b2f0ebbe954970e29a356e471e32560e1708b\"" Apr 13 20:13:42.036345 containerd[1502]: time="2026-04-13T20:13:42.036277245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-c6ngr,Uid:487ea9ef-6892-48d7-a2fa-05a0f1fc06fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2\"" Apr 13 20:13:42.082899 systemd[1]: Started cri-containerd-cbf57d09d14043b4ef3fbff9179b2f0ebbe954970e29a356e471e32560e1708b.scope - libcontainer container cbf57d09d14043b4ef3fbff9179b2f0ebbe954970e29a356e471e32560e1708b. Apr 13 20:13:42.152713 containerd[1502]: time="2026-04-13T20:13:42.152682560Z" level=info msg="StartContainer for \"cbf57d09d14043b4ef3fbff9179b2f0ebbe954970e29a356e471e32560e1708b\" returns successfully" Apr 13 20:13:42.569612 kubelet[2564]: I0413 20:13:42.569045 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fvx68" podStartSLOduration=43.569031544 podStartE2EDuration="43.569031544s" podCreationTimestamp="2026-04-13 20:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:13:42.568112085 +0000 UTC m=+50.387528781" watchObservedRunningTime="2026-04-13 20:13:42.569031544 +0000 UTC m=+50.388448250" Apr 13 20:13:42.735016 containerd[1502]: time="2026-04-13T20:13:42.734965409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:42.736402 containerd[1502]: time="2026-04-13T20:13:42.736367554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:13:42.737219 containerd[1502]: time="2026-04-13T20:13:42.737187143Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:42.738808 containerd[1502]: time="2026-04-13T20:13:42.738790329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:42.739572 containerd[1502]: time="2026-04-13T20:13:42.739225151Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.565522316s" Apr 13 20:13:42.739572 containerd[1502]: time="2026-04-13T20:13:42.739248868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:13:42.740612 containerd[1502]: time="2026-04-13T20:13:42.740598537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:13:42.752769 containerd[1502]: time="2026-04-13T20:13:42.751978869Z" level=info msg="CreateContainer within sandbox \"19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:13:42.763533 containerd[1502]: time="2026-04-13T20:13:42.763504775Z" level=info msg="CreateContainer within sandbox \"19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"75bebb366badbbf94f1c526d5cbdb6ce7a834ceea81950d4dff0a5ae7f6949c8\"" Apr 13 20:13:42.764819 containerd[1502]: time="2026-04-13T20:13:42.764380290Z" level=info msg="StartContainer for \"75bebb366badbbf94f1c526d5cbdb6ce7a834ceea81950d4dff0a5ae7f6949c8\"" Apr 13 20:13:42.798904 systemd[1]: Started cri-containerd-75bebb366badbbf94f1c526d5cbdb6ce7a834ceea81950d4dff0a5ae7f6949c8.scope - libcontainer container 75bebb366badbbf94f1c526d5cbdb6ce7a834ceea81950d4dff0a5ae7f6949c8. Apr 13 20:13:42.837574 containerd[1502]: time="2026-04-13T20:13:42.837538998Z" level=info msg="StartContainer for \"75bebb366badbbf94f1c526d5cbdb6ce7a834ceea81950d4dff0a5ae7f6949c8\" returns successfully" Apr 13 20:13:43.329136 systemd-networkd[1414]: calia3b0227342b: Gained IPv6LL Apr 13 20:13:43.391964 systemd-networkd[1414]: cali1f2f9b31ec4: Gained IPv6LL Apr 13 20:13:43.521111 systemd-networkd[1414]: vxlan.calico: Gained IPv6LL Apr 13 20:13:43.587633 kubelet[2564]: I0413 20:13:43.587430 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c74bf58b8-pnfc5" podStartSLOduration=30.515993942 podStartE2EDuration="34.587416205s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="2026-04-13 20:13:38.668498239 +0000 UTC m=+46.487914945" lastFinishedPulling="2026-04-13 20:13:42.739920502 +0000 UTC m=+50.559337208" observedRunningTime="2026-04-13 20:13:43.58607508 +0000 UTC m=+51.405491776" watchObservedRunningTime="2026-04-13 20:13:43.587416205 +0000 UTC m=+51.406832911" Apr 13 20:13:44.870018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238410083.mount: Deactivated successfully. 
Apr 13 20:13:45.148464 containerd[1502]: time="2026-04-13T20:13:45.148348531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:45.149485 containerd[1502]: time="2026-04-13T20:13:45.149452176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:13:45.150426 containerd[1502]: time="2026-04-13T20:13:45.150386235Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:45.153318 containerd[1502]: time="2026-04-13T20:13:45.152772940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:13:45.153318 containerd[1502]: time="2026-04-13T20:13:45.153208212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.412537193s" Apr 13 20:13:45.153318 containerd[1502]: time="2026-04-13T20:13:45.153229668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:13:45.156912 containerd[1502]: time="2026-04-13T20:13:45.156894141Z" level=info msg="CreateContainer within sandbox \"a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:13:45.170160 containerd[1502]: time="2026-04-13T20:13:45.170132590Z" level=info msg="CreateContainer within sandbox \"a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7be07a5fa7498d2fbe98d58aca659461b4a36e18a8c9b9741cb88d33378695b3\"" Apr 13 20:13:45.171149 containerd[1502]: time="2026-04-13T20:13:45.171119884Z" level=info msg="StartContainer for \"7be07a5fa7498d2fbe98d58aca659461b4a36e18a8c9b9741cb88d33378695b3\"" Apr 13 20:13:45.212904 systemd[1]: Started cri-containerd-7be07a5fa7498d2fbe98d58aca659461b4a36e18a8c9b9741cb88d33378695b3.scope - libcontainer container 7be07a5fa7498d2fbe98d58aca659461b4a36e18a8c9b9741cb88d33378695b3. 
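A back-of-the-envelope rate check for the goldmane pull above (size 55623232 bytes in 2.412537193s); both inputs are copied from the log, and only the MiB/s figure is derived.

package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 55623232.0 // repo size reported for the goldmane image
	elapsed, _ := time.ParseDuration("2.412537193s")

	mib := sizeBytes / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
}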
Apr 13 20:13:45.250846 containerd[1502]: time="2026-04-13T20:13:45.250681279Z" level=info msg="StartContainer for \"7be07a5fa7498d2fbe98d58aca659461b4a36e18a8c9b9741cb88d33378695b3\" returns successfully" Apr 13 20:13:45.667272 kubelet[2564]: I0413 20:13:45.667219 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-c6ngr" podStartSLOduration=33.551015639 podStartE2EDuration="36.667206275s" podCreationTimestamp="2026-04-13 20:13:09 +0000 UTC" firstStartedPulling="2026-04-13 20:13:42.037832266 +0000 UTC m=+49.857248972" lastFinishedPulling="2026-04-13 20:13:45.154022912 +0000 UTC m=+52.973439608" observedRunningTime="2026-04-13 20:13:45.587185611 +0000 UTC m=+53.406602347" watchObservedRunningTime="2026-04-13 20:13:45.667206275 +0000 UTC m=+53.486622981" Apr 13 20:13:52.258158 containerd[1502]: time="2026-04-13T20:13:52.258098906Z" level=info msg="StopPodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\"" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.300 [WARNING][5608] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19", Pod:"coredns-674b8bbfcf-fvx68", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f2f9b31ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.301 [INFO][5608] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.301 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" iface="eth0" netns="" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.302 [INFO][5608] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.302 [INFO][5608] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.335 [INFO][5616] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.335 [INFO][5616] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.335 [INFO][5616] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.340 [WARNING][5616] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.340 [INFO][5616] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.341 [INFO][5616] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.345731 containerd[1502]: 2026-04-13 20:13:52.343 [INFO][5608] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.345731 containerd[1502]: time="2026-04-13T20:13:52.345607705Z" level=info msg="TearDown network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" successfully" Apr 13 20:13:52.345731 containerd[1502]: time="2026-04-13T20:13:52.345627029Z" level=info msg="StopPodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" returns successfully" Apr 13 20:13:52.346468 containerd[1502]: time="2026-04-13T20:13:52.346426082Z" level=info msg="RemovePodSandbox for \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\"" Apr 13 20:13:52.346468 containerd[1502]: time="2026-04-13T20:13:52.346471911Z" level=info msg="Forcibly stopping sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\"" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.375 [WARNING][5630] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"650a7fe1-f630-4ffa-8ebb-7c8ab54e8781", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"10a908f7c1bc90d24532f986efc1f84275dabb57f07f9f0b437bb10ad06deb19", Pod:"coredns-674b8bbfcf-fvx68", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f2f9b31ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.375 [INFO][5630] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.375 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" iface="eth0" netns="" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.375 [INFO][5630] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.375 [INFO][5630] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.397 [INFO][5638] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.399 [INFO][5638] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.399 [INFO][5638] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.406 [WARNING][5638] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.406 [INFO][5638] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" HandleID="k8s-pod-network.5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--fvx68-eth0" Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.408 [INFO][5638] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.415930 containerd[1502]: 2026-04-13 20:13:52.411 [INFO][5630] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738" Apr 13 20:13:52.416249 containerd[1502]: time="2026-04-13T20:13:52.415959805Z" level=info msg="TearDown network for sandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" successfully" Apr 13 20:13:52.421039 containerd[1502]: time="2026-04-13T20:13:52.420877777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:52.421039 containerd[1502]: time="2026-04-13T20:13:52.420930837Z" level=info msg="RemovePodSandbox \"5284baa1c7e027cc547064ab6d8e969cb9a346da9157c9cb5a740d3235ae8738\" returns successfully" Apr 13 20:13:52.421534 containerd[1502]: time="2026-04-13T20:13:52.421491727Z" level=info msg="StopPodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\"" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.449 [WARNING][5662] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.449 [INFO][5662] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.449 [INFO][5662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" iface="eth0" netns="" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.449 [INFO][5662] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.449 [INFO][5662] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.465 [INFO][5669] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.465 [INFO][5669] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.465 [INFO][5669] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.470 [WARNING][5669] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.471 [INFO][5669] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.472 [INFO][5669] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.479823 containerd[1502]: 2026-04-13 20:13:52.478 [INFO][5662] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.481144 containerd[1502]: time="2026-04-13T20:13:52.479862796Z" level=info msg="TearDown network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" successfully" Apr 13 20:13:52.481144 containerd[1502]: time="2026-04-13T20:13:52.479890481Z" level=info msg="StopPodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" returns successfully" Apr 13 20:13:52.481144 containerd[1502]: time="2026-04-13T20:13:52.480363825Z" level=info msg="RemovePodSandbox for \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\"" Apr 13 20:13:52.481144 containerd[1502]: time="2026-04-13T20:13:52.480386769Z" level=info msg="Forcibly stopping sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\"" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.511 [WARNING][5688] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" WorkloadEndpoint="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.511 [INFO][5688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.511 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" iface="eth0" netns="" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.511 [INFO][5688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.511 [INFO][5688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.528 [INFO][5696] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.528 [INFO][5696] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.528 [INFO][5696] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.533 [WARNING][5696] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.533 [INFO][5696] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" HandleID="k8s-pod-network.4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Workload="ci--4081--3--7--c--b0ece174b2-k8s-whisker--f5f8f9646--r8pmz-eth0" Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.536 [INFO][5696] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.542688 containerd[1502]: 2026-04-13 20:13:52.539 [INFO][5688] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4" Apr 13 20:13:52.542688 containerd[1502]: time="2026-04-13T20:13:52.542372047Z" level=info msg="TearDown network for sandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" successfully" Apr 13 20:13:52.548864 containerd[1502]: time="2026-04-13T20:13:52.548805572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:52.548864 containerd[1502]: time="2026-04-13T20:13:52.548865632Z" level=info msg="RemovePodSandbox \"4a90e67ea8225bd2573c73399c274200e8afc1b6b4679ada5589da74579e76b4\" returns successfully" Apr 13 20:13:52.549278 containerd[1502]: time="2026-04-13T20:13:52.549219107Z" level=info msg="StopPodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\"" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.582 [WARNING][5711] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2", Pod:"goldmane-5b85766d88-c6ngr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3b0227342b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.582 [INFO][5711] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.582 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" iface="eth0" netns="" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.582 [INFO][5711] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.582 [INFO][5711] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.598 [INFO][5718] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.599 [INFO][5718] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.599 [INFO][5718] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.604 [WARNING][5718] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.604 [INFO][5718] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.605 [INFO][5718] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.608686 containerd[1502]: 2026-04-13 20:13:52.607 [INFO][5711] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.609067 containerd[1502]: time="2026-04-13T20:13:52.608727358Z" level=info msg="TearDown network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" successfully" Apr 13 20:13:52.609067 containerd[1502]: time="2026-04-13T20:13:52.608770816Z" level=info msg="StopPodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" returns successfully" Apr 13 20:13:52.609348 containerd[1502]: time="2026-04-13T20:13:52.609274467Z" level=info msg="RemovePodSandbox for \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\"" Apr 13 20:13:52.609388 containerd[1502]: time="2026-04-13T20:13:52.609352971Z" level=info msg="Forcibly stopping sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\"" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.636 [WARNING][5732] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"487ea9ef-6892-48d7-a2fa-05a0f1fc06fa", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"a204574605902815f9820180cac7e63bc55cdcf57d2773524820542d03e8a0c2", Pod:"goldmane-5b85766d88-c6ngr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3b0227342b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.636 [INFO][5732] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.636 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" iface="eth0" netns="" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.636 [INFO][5732] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.636 [INFO][5732] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.652 [INFO][5739] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.652 [INFO][5739] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.652 [INFO][5739] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.657 [WARNING][5739] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.657 [INFO][5739] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" HandleID="k8s-pod-network.eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Workload="ci--4081--3--7--c--b0ece174b2-k8s-goldmane--5b85766d88--c6ngr-eth0" Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.660 [INFO][5739] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.664628 containerd[1502]: 2026-04-13 20:13:52.661 [INFO][5732] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e" Apr 13 20:13:52.665460 containerd[1502]: time="2026-04-13T20:13:52.665011602Z" level=info msg="TearDown network for sandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" successfully" Apr 13 20:13:52.669277 containerd[1502]: time="2026-04-13T20:13:52.669245533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:52.669371 containerd[1502]: time="2026-04-13T20:13:52.669292831Z" level=info msg="RemovePodSandbox \"eab87b8e965b1f501b350446e952cb087965a45b1271666ad82ac17bc2c0764e\" returns successfully" Apr 13 20:13:52.669705 containerd[1502]: time="2026-04-13T20:13:52.669683821Z" level=info msg="StopPodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\"" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.699 [WARNING][5753] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0", GenerateName:"calico-kube-controllers-6c74bf58b8-", Namespace:"calico-system", SelfLink:"", UID:"a4e90b81-acc5-4fe5-b623-c177b554394d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c74bf58b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552", Pod:"calico-kube-controllers-6c74bf58b8-pnfc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc50e08f996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.699 [INFO][5753] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.699 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" iface="eth0" netns="" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.699 [INFO][5753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.700 [INFO][5753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.714 [INFO][5761] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.714 [INFO][5761] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.714 [INFO][5761] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.720 [WARNING][5761] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.720 [INFO][5761] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.721 [INFO][5761] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.725154 containerd[1502]: 2026-04-13 20:13:52.723 [INFO][5753] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.725501 containerd[1502]: time="2026-04-13T20:13:52.725186875Z" level=info msg="TearDown network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" successfully" Apr 13 20:13:52.725501 containerd[1502]: time="2026-04-13T20:13:52.725209019Z" level=info msg="StopPodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" returns successfully" Apr 13 20:13:52.725778 containerd[1502]: time="2026-04-13T20:13:52.725760258Z" level=info msg="RemovePodSandbox for \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\"" Apr 13 20:13:52.725846 containerd[1502]: time="2026-04-13T20:13:52.725826710Z" level=info msg="Forcibly stopping sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\"" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.753 [WARNING][5775] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0", GenerateName:"calico-kube-controllers-6c74bf58b8-", Namespace:"calico-system", SelfLink:"", UID:"a4e90b81-acc5-4fe5-b623-c177b554394d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c74bf58b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"19c9498be7c8caad7c8edb1b6773e601cc9a15cc1cf83661e85133ec1ff91552", Pod:"calico-kube-controllers-6c74bf58b8-pnfc5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc50e08f996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.753 [INFO][5775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.753 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" iface="eth0" netns="" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.753 [INFO][5775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.753 [INFO][5775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.769 [INFO][5782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.769 [INFO][5782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.769 [INFO][5782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.774 [WARNING][5782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.774 [INFO][5782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" HandleID="k8s-pod-network.561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--kube--controllers--6c74bf58b8--pnfc5-eth0" Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.775 [INFO][5782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.779733 containerd[1502]: 2026-04-13 20:13:52.777 [INFO][5775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce" Apr 13 20:13:52.780082 containerd[1502]: time="2026-04-13T20:13:52.779801849Z" level=info msg="TearDown network for sandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" successfully" Apr 13 20:13:52.783933 containerd[1502]: time="2026-04-13T20:13:52.783820600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:52.783933 containerd[1502]: time="2026-04-13T20:13:52.783896134Z" level=info msg="RemovePodSandbox \"561879f88c769ab8637c60eca9e9ef7d40c594b7d6c2ae279f60d8abb5606fce\" returns successfully" Apr 13 20:13:52.784365 containerd[1502]: time="2026-04-13T20:13:52.784350286Z" level=info msg="StopPodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\"" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.814 [WARNING][5796] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbf82d0e-deb2-40ec-a573-b417e42188bc", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79", Pod:"coredns-674b8bbfcf-dr5n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aeda35f57f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.814 [INFO][5796] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.814 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" iface="eth0" netns="" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.814 [INFO][5796] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.814 [INFO][5796] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.836 [INFO][5804] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.836 [INFO][5804] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.836 [INFO][5804] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.841 [WARNING][5804] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.841 [INFO][5804] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.842 [INFO][5804] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.846922 containerd[1502]: 2026-04-13 20:13:52.844 [INFO][5796] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.847317 containerd[1502]: time="2026-04-13T20:13:52.846946363Z" level=info msg="TearDown network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" successfully" Apr 13 20:13:52.847317 containerd[1502]: time="2026-04-13T20:13:52.846972937Z" level=info msg="StopPodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" returns successfully" Apr 13 20:13:52.847564 containerd[1502]: time="2026-04-13T20:13:52.847542319Z" level=info msg="RemovePodSandbox for \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\"" Apr 13 20:13:52.847589 containerd[1502]: time="2026-04-13T20:13:52.847570774Z" level=info msg="Forcibly stopping sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\"" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.878 [WARNING][5818] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bbf82d0e-deb2-40ec-a573-b417e42188bc", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"6c0b119d880e19719a5dfba9f9cc3b92f107080a02068e4ed40d11b9d4d9aa79", Pod:"coredns-674b8bbfcf-dr5n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aeda35f57f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.879 [INFO][5818] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.879 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" iface="eth0" netns="" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.879 [INFO][5818] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.879 [INFO][5818] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.895 [INFO][5825] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.895 [INFO][5825] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.895 [INFO][5825] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.901 [WARNING][5825] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.901 [INFO][5825] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" HandleID="k8s-pod-network.628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Workload="ci--4081--3--7--c--b0ece174b2-k8s-coredns--674b8bbfcf--dr5n9-eth0" Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.902 [INFO][5825] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.906541 containerd[1502]: 2026-04-13 20:13:52.904 [INFO][5818] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c" Apr 13 20:13:52.906883 containerd[1502]: time="2026-04-13T20:13:52.906572556Z" level=info msg="TearDown network for sandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" successfully" Apr 13 20:13:52.910424 containerd[1502]: time="2026-04-13T20:13:52.910382921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:52.910515 containerd[1502]: time="2026-04-13T20:13:52.910431879Z" level=info msg="RemovePodSandbox \"628acb7ebd14d572e15976a08d1445d1a6e5c9985c7d1c47ac0f1273fdcc754c\" returns successfully" Apr 13 20:13:52.911112 containerd[1502]: time="2026-04-13T20:13:52.910880599Z" level=info msg="StopPodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\"" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.942 [WARNING][5840] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"5925fe08-2006-453b-ae7e-b5697562e697", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0", Pod:"calico-apiserver-75d956895-42kqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03063751a48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.942 [INFO][5840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.942 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" iface="eth0" netns="" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.942 [INFO][5840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.942 [INFO][5840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.959 [INFO][5847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.960 [INFO][5847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.960 [INFO][5847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.964 [WARNING][5847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.964 [INFO][5847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.966 [INFO][5847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:52.970071 containerd[1502]: 2026-04-13 20:13:52.968 [INFO][5840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:52.970071 containerd[1502]: time="2026-04-13T20:13:52.969918777Z" level=info msg="TearDown network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" successfully" Apr 13 20:13:52.970071 containerd[1502]: time="2026-04-13T20:13:52.969945232Z" level=info msg="StopPodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" returns successfully" Apr 13 20:13:52.970664 containerd[1502]: time="2026-04-13T20:13:52.970342124Z" level=info msg="RemovePodSandbox for \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\"" Apr 13 20:13:52.970664 containerd[1502]: time="2026-04-13T20:13:52.970362388Z" level=info msg="Forcibly stopping sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\"" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:52.999 [WARNING][5861] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"5925fe08-2006-453b-ae7e-b5697562e697", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"51cb548dd04d812c0313749b9327c799eef62fa3285c949b9b2aab6354ec79c0", Pod:"calico-apiserver-75d956895-42kqd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali03063751a48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.000 [INFO][5861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.000 [INFO][5861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" iface="eth0" netns="" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.000 [INFO][5861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.000 [INFO][5861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.016 [INFO][5868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.016 [INFO][5868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.017 [INFO][5868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.021 [WARNING][5868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.021 [INFO][5868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" HandleID="k8s-pod-network.4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--42kqd-eth0" Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.022 [INFO][5868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:53.027056 containerd[1502]: 2026-04-13 20:13:53.024 [INFO][5861] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859" Apr 13 20:13:53.027393 containerd[1502]: time="2026-04-13T20:13:53.027101088Z" level=info msg="TearDown network for sandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" successfully" Apr 13 20:13:53.031685 containerd[1502]: time="2026-04-13T20:13:53.031581374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:53.031685 containerd[1502]: time="2026-04-13T20:13:53.031647916Z" level=info msg="RemovePodSandbox \"4b7cfa02ecdc4548840653279c9dcd07dc6ce523ff591bc2ec86e9e20832f859\" returns successfully" Apr 13 20:13:53.032689 containerd[1502]: time="2026-04-13T20:13:53.032234465Z" level=info msg="StopPodSandbox for \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\"" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.058 [WARNING][5883] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"484da9bd-407d-408c-b0d2-a512d2d9a654", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78", Pod:"csi-node-driver-hvr8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5576905c19d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.058 [INFO][5883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.058 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" iface="eth0" netns="" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.058 [INFO][5883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.058 [INFO][5883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.073 [INFO][5890] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.073 [INFO][5890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.073 [INFO][5890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.077 [WARNING][5890] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.077 [INFO][5890] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.078 [INFO][5890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:53.082166 containerd[1502]: 2026-04-13 20:13:53.080 [INFO][5883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.082639 containerd[1502]: time="2026-04-13T20:13:53.082199339Z" level=info msg="TearDown network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\" successfully" Apr 13 20:13:53.082639 containerd[1502]: time="2026-04-13T20:13:53.082219932Z" level=info msg="StopPodSandbox for \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\" returns successfully" Apr 13 20:13:53.083025 containerd[1502]: time="2026-04-13T20:13:53.082804122Z" level=info msg="RemovePodSandbox for \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\"" Apr 13 20:13:53.083025 containerd[1502]: time="2026-04-13T20:13:53.082827196Z" level=info msg="Forcibly stopping sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\"" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.112 [WARNING][5905] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"484da9bd-407d-408c-b0d2-a512d2d9a654", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"6989ed922555dad2c2b8d2f4f5448f42f20a0dd8fca06d1a83b84893c8debc78", Pod:"csi-node-driver-hvr8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5576905c19d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.113 [INFO][5905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.113 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" iface="eth0" netns="" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.113 [INFO][5905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.113 [INFO][5905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.130 [INFO][5913] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.131 [INFO][5913] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.131 [INFO][5913] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.135 [WARNING][5913] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.135 [INFO][5913] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" HandleID="k8s-pod-network.eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Workload="ci--4081--3--7--c--b0ece174b2-k8s-csi--node--driver--hvr8v-eth0" Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.137 [INFO][5913] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:53.141337 containerd[1502]: 2026-04-13 20:13:53.139 [INFO][5905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd" Apr 13 20:13:53.141337 containerd[1502]: time="2026-04-13T20:13:53.141318027Z" level=info msg="TearDown network for sandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\" successfully" Apr 13 20:13:53.146388 containerd[1502]: time="2026-04-13T20:13:53.146329883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:53.146388 containerd[1502]: time="2026-04-13T20:13:53.146390064Z" level=info msg="RemovePodSandbox \"eb61471c0fb2b21f7ecd3aae7d5cb935a88f2e30aaa1515232cd216499a5c5dd\" returns successfully" Apr 13 20:13:53.146888 containerd[1502]: time="2026-04-13T20:13:53.146865946Z" level=info msg="StopPodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\"" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.175 [WARNING][5928] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"1c1bb0e0-ee02-473c-a263-cdfa973e52e6", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda", Pod:"calico-apiserver-75d956895-66fp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56495e50342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.175 [INFO][5928] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.175 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" iface="eth0" netns="" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.175 [INFO][5928] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.175 [INFO][5928] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.192 [INFO][5935] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.192 [INFO][5935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.192 [INFO][5935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.198 [WARNING][5935] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.198 [INFO][5935] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.199 [INFO][5935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:53.203505 containerd[1502]: 2026-04-13 20:13:53.201 [INFO][5928] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.203866 containerd[1502]: time="2026-04-13T20:13:53.203548207Z" level=info msg="TearDown network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" successfully" Apr 13 20:13:53.203866 containerd[1502]: time="2026-04-13T20:13:53.203569841Z" level=info msg="StopPodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" returns successfully" Apr 13 20:13:53.204196 containerd[1502]: time="2026-04-13T20:13:53.204172783Z" level=info msg="RemovePodSandbox for \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\"" Apr 13 20:13:53.204228 containerd[1502]: time="2026-04-13T20:13:53.204200188Z" level=info msg="Forcibly stopping sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\"" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.233 [WARNING][5950] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0", GenerateName:"calico-apiserver-75d956895-", Namespace:"calico-system", SelfLink:"", UID:"1c1bb0e0-ee02-473c-a263-cdfa973e52e6", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d956895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-c-b0ece174b2", ContainerID:"cb5f4e39332935303a8e3ea236899f5d3a0cae714e000ee7e434934280196bda", Pod:"calico-apiserver-75d956895-66fp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali56495e50342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.234 [INFO][5950] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.234 [INFO][5950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" iface="eth0" netns="" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.234 [INFO][5950] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.234 [INFO][5950] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.249 [INFO][5957] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.250 [INFO][5957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.250 [INFO][5957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.254 [WARNING][5957] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.254 [INFO][5957] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" HandleID="k8s-pod-network.27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Workload="ci--4081--3--7--c--b0ece174b2-k8s-calico--apiserver--75d956895--66fp9-eth0" Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.255 [INFO][5957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:13:53.259054 containerd[1502]: 2026-04-13 20:13:53.257 [INFO][5950] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0" Apr 13 20:13:53.261242 containerd[1502]: time="2026-04-13T20:13:53.259089443Z" level=info msg="TearDown network for sandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" successfully" Apr 13 20:13:53.264988 containerd[1502]: time="2026-04-13T20:13:53.264950565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:13:53.265065 containerd[1502]: time="2026-04-13T20:13:53.265008965Z" level=info msg="RemovePodSandbox \"27a56beff9d3e190d1c24e9692b18b57ed766cba0adc35521b153af45a7f1ad0\" returns successfully" Apr 13 20:14:15.614174 systemd[1]: run-containerd-runc-k8s.io-7be07a5fa7498d2fbe98d58aca659461b4a36e18a8c9b9741cb88d33378695b3-runc.EG4ivg.mount: Deactivated successfully. Apr 13 20:14:17.579218 systemd[1]: Started sshd@7-204.168.241.7:22-20.229.252.112:44562.service - OpenSSH per-connection server daemon (20.229.252.112:44562). Apr 13 20:14:17.792803 sshd[6095]: Accepted publickey for core from 20.229.252.112 port 44562 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:17.795577 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:17.803887 systemd-logind[1489]: New session 8 of user core. Apr 13 20:14:17.813024 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:14:18.093435 sshd[6095]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:18.097572 systemd[1]: sshd@7-204.168.241.7:22-20.229.252.112:44562.service: Deactivated successfully. Apr 13 20:14:18.101690 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:14:18.104822 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:14:18.107225 systemd-logind[1489]: Removed session 8. Apr 13 20:14:23.134672 systemd[1]: Started sshd@8-204.168.241.7:22-20.229.252.112:44578.service - OpenSSH per-connection server daemon (20.229.252.112:44578). Apr 13 20:14:23.335723 sshd[6125]: Accepted publickey for core from 20.229.252.112 port 44578 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:23.338625 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:23.346322 systemd-logind[1489]: New session 9 of user core. Apr 13 20:14:23.352012 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 13 20:14:23.608605 sshd[6125]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:23.614435 systemd[1]: sshd@8-204.168.241.7:22-20.229.252.112:44578.service: Deactivated successfully. Apr 13 20:14:23.618535 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:14:23.619817 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:14:23.621194 systemd-logind[1489]: Removed session 9. Apr 13 20:14:28.660184 systemd[1]: Started sshd@9-204.168.241.7:22-20.229.252.112:56514.service - OpenSSH per-connection server daemon (20.229.252.112:56514). Apr 13 20:14:28.888115 sshd[6162]: Accepted publickey for core from 20.229.252.112 port 56514 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:28.889950 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:28.896667 systemd-logind[1489]: New session 10 of user core. Apr 13 20:14:28.904977 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:14:29.123878 sshd[6162]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:29.129062 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:14:29.130107 systemd[1]: sshd@9-204.168.241.7:22-20.229.252.112:56514.service: Deactivated successfully. Apr 13 20:14:29.133554 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:14:29.134619 systemd-logind[1489]: Removed session 10. Apr 13 20:14:34.172152 systemd[1]: Started sshd@10-204.168.241.7:22-20.229.252.112:56528.service - OpenSSH per-connection server daemon (20.229.252.112:56528). Apr 13 20:14:34.407809 sshd[6194]: Accepted publickey for core from 20.229.252.112 port 56528 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:34.409350 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:34.418220 systemd-logind[1489]: New session 11 of user core. Apr 13 20:14:34.421969 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:14:34.683978 sshd[6194]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:34.687031 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:14:34.688074 systemd[1]: sshd@10-204.168.241.7:22-20.229.252.112:56528.service: Deactivated successfully. Apr 13 20:14:34.691397 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:14:34.694400 systemd-logind[1489]: Removed session 11. Apr 13 20:14:34.727143 systemd[1]: Started sshd@11-204.168.241.7:22-20.229.252.112:56538.service - OpenSSH per-connection server daemon (20.229.252.112:56538). Apr 13 20:14:34.930887 sshd[6208]: Accepted publickey for core from 20.229.252.112 port 56538 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:34.934619 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:34.943918 systemd-logind[1489]: New session 12 of user core. Apr 13 20:14:34.949991 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:14:35.196866 sshd[6208]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:35.200709 systemd[1]: sshd@11-204.168.241.7:22-20.229.252.112:56538.service: Deactivated successfully. Apr 13 20:14:35.202900 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:14:35.204594 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:14:35.205626 systemd-logind[1489]: Removed session 12. 
Apr 13 20:14:35.238988 systemd[1]: Started sshd@12-204.168.241.7:22-20.229.252.112:44848.service - OpenSSH per-connection server daemon (20.229.252.112:44848). Apr 13 20:14:35.442091 sshd[6219]: Accepted publickey for core from 20.229.252.112 port 44848 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:35.446094 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:35.456363 systemd-logind[1489]: New session 13 of user core. Apr 13 20:14:35.467017 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:14:35.739120 sshd[6219]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:35.742270 systemd[1]: sshd@12-204.168.241.7:22-20.229.252.112:44848.service: Deactivated successfully. Apr 13 20:14:35.742498 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:14:35.744136 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:14:35.745009 systemd-logind[1489]: Removed session 13. Apr 13 20:14:40.779219 systemd[1]: Started sshd@13-204.168.241.7:22-20.229.252.112:44860.service - OpenSSH per-connection server daemon (20.229.252.112:44860). Apr 13 20:14:40.997801 sshd[6232]: Accepted publickey for core from 20.229.252.112 port 44860 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:41.000227 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:41.008087 systemd-logind[1489]: New session 14 of user core. Apr 13 20:14:41.014083 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:14:41.224786 sshd[6232]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:41.228757 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:14:41.229345 systemd[1]: sshd@13-204.168.241.7:22-20.229.252.112:44860.service: Deactivated successfully. Apr 13 20:14:41.231516 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:14:41.232441 systemd-logind[1489]: Removed session 14. Apr 13 20:14:46.265223 systemd[1]: Started sshd@14-204.168.241.7:22-20.229.252.112:59942.service - OpenSSH per-connection server daemon (20.229.252.112:59942). Apr 13 20:14:46.471079 sshd[6284]: Accepted publickey for core from 20.229.252.112 port 59942 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:46.474193 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:46.483995 systemd-logind[1489]: New session 15 of user core. Apr 13 20:14:46.492013 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:14:46.726103 sshd[6284]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:46.733305 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:14:46.734254 systemd[1]: sshd@14-204.168.241.7:22-20.229.252.112:59942.service: Deactivated successfully. Apr 13 20:14:46.738529 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:14:46.739955 systemd-logind[1489]: Removed session 15. Apr 13 20:14:46.773055 systemd[1]: Started sshd@15-204.168.241.7:22-20.229.252.112:59948.service - OpenSSH per-connection server daemon (20.229.252.112:59948). 
Apr 13 20:14:46.997147 sshd[6298]: Accepted publickey for core from 20.229.252.112 port 59948 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:46.999736 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:47.007733 systemd-logind[1489]: New session 16 of user core. Apr 13 20:14:47.014002 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:14:47.486255 sshd[6298]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:47.490474 systemd[1]: sshd@15-204.168.241.7:22-20.229.252.112:59948.service: Deactivated successfully. Apr 13 20:14:47.490528 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:14:47.492693 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:14:47.493931 systemd-logind[1489]: Removed session 16. Apr 13 20:14:47.539108 systemd[1]: Started sshd@16-204.168.241.7:22-20.229.252.112:59950.service - OpenSSH per-connection server daemon (20.229.252.112:59950). Apr 13 20:14:47.744149 sshd[6310]: Accepted publickey for core from 20.229.252.112 port 59950 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:47.746951 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:47.754852 systemd-logind[1489]: New session 17 of user core. Apr 13 20:14:47.762979 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:14:48.430182 sshd[6310]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:48.433691 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit. Apr 13 20:14:48.434665 systemd[1]: sshd@16-204.168.241.7:22-20.229.252.112:59950.service: Deactivated successfully. Apr 13 20:14:48.437434 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:14:48.439173 systemd-logind[1489]: Removed session 17. Apr 13 20:14:48.465639 systemd[1]: Started sshd@17-204.168.241.7:22-20.229.252.112:59966.service - OpenSSH per-connection server daemon (20.229.252.112:59966). Apr 13 20:14:48.684849 sshd[6336]: Accepted publickey for core from 20.229.252.112 port 59966 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:48.686158 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:48.696912 systemd-logind[1489]: New session 18 of user core. Apr 13 20:14:48.706031 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:14:49.056114 sshd[6336]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:49.059933 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:14:49.060650 systemd[1]: sshd@17-204.168.241.7:22-20.229.252.112:59966.service: Deactivated successfully. Apr 13 20:14:49.062589 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:14:49.063644 systemd-logind[1489]: Removed session 18. Apr 13 20:14:49.098686 systemd[1]: Started sshd@18-204.168.241.7:22-20.229.252.112:59982.service - OpenSSH per-connection server daemon (20.229.252.112:59982). Apr 13 20:14:49.311604 sshd[6347]: Accepted publickey for core from 20.229.252.112 port 59982 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:49.314815 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:49.323834 systemd-logind[1489]: New session 19 of user core. Apr 13 20:14:49.332484 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 13 20:14:49.556049 sshd[6347]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:49.561572 systemd[1]: sshd@18-204.168.241.7:22-20.229.252.112:59982.service: Deactivated successfully. Apr 13 20:14:49.565279 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 20:14:49.569411 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:14:49.570886 systemd-logind[1489]: Removed session 19. Apr 13 20:14:54.612459 systemd[1]: Started sshd@19-204.168.241.7:22-20.229.252.112:59998.service - OpenSSH per-connection server daemon (20.229.252.112:59998). Apr 13 20:14:54.819615 sshd[6364]: Accepted publickey for core from 20.229.252.112 port 59998 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:14:54.822474 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:14:54.830401 systemd-logind[1489]: New session 20 of user core. Apr 13 20:14:54.840998 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 20:14:55.086075 sshd[6364]: pam_unix(sshd:session): session closed for user core Apr 13 20:14:55.090667 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit. Apr 13 20:14:55.091628 systemd[1]: sshd@19-204.168.241.7:22-20.229.252.112:59998.service: Deactivated successfully. Apr 13 20:14:55.093627 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 20:14:55.094923 systemd-logind[1489]: Removed session 20. Apr 13 20:15:00.131047 systemd[1]: Started sshd@20-204.168.241.7:22-20.229.252.112:55686.service - OpenSSH per-connection server daemon (20.229.252.112:55686). Apr 13 20:15:00.327221 sshd[6401]: Accepted publickey for core from 20.229.252.112 port 55686 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:15:00.328756 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:15:00.332273 systemd-logind[1489]: New session 21 of user core. Apr 13 20:15:00.337936 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 20:15:00.552080 sshd[6401]: pam_unix(sshd:session): session closed for user core Apr 13 20:15:00.554851 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit. Apr 13 20:15:00.555712 systemd[1]: sshd@20-204.168.241.7:22-20.229.252.112:55686.service: Deactivated successfully. Apr 13 20:15:00.558292 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 20:15:00.562423 systemd-logind[1489]: Removed session 21. Apr 13 20:15:16.570309 systemd[1]: cri-containerd-441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01.scope: Deactivated successfully. Apr 13 20:15:16.571218 systemd[1]: cri-containerd-441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01.scope: Consumed 8.758s CPU time. 
Apr 13 20:15:16.611550 containerd[1502]: time="2026-04-13T20:15:16.609840322Z" level=info msg="shim disconnected" id=441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01 namespace=k8s.io Apr 13 20:15:16.612376 containerd[1502]: time="2026-04-13T20:15:16.612141708Z" level=warning msg="cleaning up after shim disconnected" id=441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01 namespace=k8s.io Apr 13 20:15:16.612376 containerd[1502]: time="2026-04-13T20:15:16.612172919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:15:16.616507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01-rootfs.mount: Deactivated successfully. Apr 13 20:15:16.628579 containerd[1502]: time="2026-04-13T20:15:16.628538051Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:15:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:15:16.796110 kubelet[2564]: I0413 20:15:16.796050 2564 scope.go:117] "RemoveContainer" containerID="441fd16c52a866043ad62b39c27debff321f4c32313e8f4c51ee30872d89de01" Apr 13 20:15:16.801198 containerd[1502]: time="2026-04-13T20:15:16.800791109Z" level=info msg="CreateContainer within sandbox \"a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 13 20:15:16.811680 containerd[1502]: time="2026-04-13T20:15:16.811473666Z" level=info msg="CreateContainer within sandbox \"a96145f236f2e03d6ffed153f2ad94f530da629ddee49260b7498130d8d4f7a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d8c7c220a54f8659746b9c4235fdf7f7754e0ec1c7d00f5e41e021784f5ad593\"" Apr 13 20:15:16.813763 containerd[1502]: time="2026-04-13T20:15:16.813147711Z" level=info msg="StartContainer for \"d8c7c220a54f8659746b9c4235fdf7f7754e0ec1c7d00f5e41e021784f5ad593\"" Apr 13 20:15:16.849886 systemd[1]: Started cri-containerd-d8c7c220a54f8659746b9c4235fdf7f7754e0ec1c7d00f5e41e021784f5ad593.scope - libcontainer container d8c7c220a54f8659746b9c4235fdf7f7754e0ec1c7d00f5e41e021784f5ad593. Apr 13 20:15:16.872091 containerd[1502]: time="2026-04-13T20:15:16.872039241Z" level=info msg="StartContainer for \"d8c7c220a54f8659746b9c4235fdf7f7754e0ec1c7d00f5e41e021784f5ad593\" returns successfully" Apr 13 20:15:17.029276 kubelet[2564]: E0413 20:15:17.029193 2564 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35844->10.0.0.2:2379: read: connection timed out" Apr 13 20:15:17.648916 systemd[1]: cri-containerd-4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551.scope: Deactivated successfully. Apr 13 20:15:17.649411 systemd[1]: cri-containerd-4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551.scope: Consumed 3.714s CPU time, 18.2M memory peak, 0B memory swap peak. 
Apr 13 20:15:17.692150 containerd[1502]: time="2026-04-13T20:15:17.691781169Z" level=info msg="shim disconnected" id=4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551 namespace=k8s.io Apr 13 20:15:17.692150 containerd[1502]: time="2026-04-13T20:15:17.691847309Z" level=warning msg="cleaning up after shim disconnected" id=4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551 namespace=k8s.io Apr 13 20:15:17.692150 containerd[1502]: time="2026-04-13T20:15:17.691882779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:15:17.701459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551-rootfs.mount: Deactivated successfully. Apr 13 20:15:17.799488 kubelet[2564]: I0413 20:15:17.799411 2564 scope.go:117] "RemoveContainer" containerID="4a93c31be297cdf8998c5dd4f2165d28e2cbbe38861cdce8cb3ab17fe5b0c551" Apr 13 20:15:17.804653 containerd[1502]: time="2026-04-13T20:15:17.804174069Z" level=info msg="CreateContainer within sandbox \"841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 20:15:17.821248 containerd[1502]: time="2026-04-13T20:15:17.821185357Z" level=info msg="CreateContainer within sandbox \"841c82ef5d9120783d621631c48583704df8dcb43cc4f8a3b7f5ef4da0fd9e2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"87387ebbfba0f380cd8fd1bf505e0f634a2048ddb5a704cdb8c9036140d52d3e\"" Apr 13 20:15:17.823331 containerd[1502]: time="2026-04-13T20:15:17.821926590Z" level=info msg="StartContainer for \"87387ebbfba0f380cd8fd1bf505e0f634a2048ddb5a704cdb8c9036140d52d3e\"" Apr 13 20:15:17.872863 systemd[1]: Started cri-containerd-87387ebbfba0f380cd8fd1bf505e0f634a2048ddb5a704cdb8c9036140d52d3e.scope - libcontainer container 87387ebbfba0f380cd8fd1bf505e0f634a2048ddb5a704cdb8c9036140d52d3e. Apr 13 20:15:17.915427 containerd[1502]: time="2026-04-13T20:15:17.915085824Z" level=info msg="StartContainer for \"87387ebbfba0f380cd8fd1bf505e0f634a2048ddb5a704cdb8c9036140d52d3e\" returns successfully" Apr 13 20:15:21.124417 kubelet[2564]: E0413 20:15:21.122237 2564 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:35658->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-c-b0ece174b2.18a603d3aba5f1f9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-c-b0ece174b2,UID:3390f7752f26c3f07520bdf31500b229,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-c-b0ece174b2,},FirstTimestamp:2026-04-13 20:15:10.697001465 +0000 UTC m=+138.516418171,LastTimestamp:2026-04-13 20:15:10.697001465 +0000 UTC m=+138.516418171,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-c-b0ece174b2,}"