Apr 17 23:37:02.112200 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:37:02.112268 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:02.112286 kernel: BIOS-provided physical RAM map:
Apr 17 23:37:02.112300 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 17 23:37:02.112314 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 17 23:37:02.112328 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 17 23:37:02.112344 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 17 23:37:02.112363 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 17 23:37:02.112377 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 17 23:37:02.112391 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Apr 17 23:37:02.112406 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Apr 17 23:37:02.112420 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Apr 17 23:37:02.112433 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 17 23:37:02.112447 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 17 23:37:02.112469 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 17 23:37:02.112486 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 17 23:37:02.112502 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 17 23:37:02.112518 kernel: NX (Execute Disable) protection: active
Apr 17 23:37:02.112532 kernel: APIC: Static calls initialized
Apr 17 23:37:02.112549 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:37:02.112566 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd300018
Apr 17 23:37:02.112583 kernel: SMBIOS 2.4 present.
Apr 17 23:37:02.112600 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Apr 17 23:37:02.112616 kernel: Hypervisor detected: KVM
Apr 17 23:37:02.112637 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:37:02.112654 kernel: kvm-clock: using sched offset of 13357809649 cycles
Apr 17 23:37:02.112669 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:37:02.112685 kernel: tsc: Detected 2299.998 MHz processor
Apr 17 23:37:02.112702 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:37:02.112720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:37:02.112737 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 17 23:37:02.112754 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Apr 17 23:37:02.112770 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:37:02.112791 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 17 23:37:02.112807 kernel: Using GB pages for direct mapping
Apr 17 23:37:02.112824 kernel: Secure boot disabled
Apr 17 23:37:02.112839 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:37:02.112856 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 17 23:37:02.112873 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 17 23:37:02.112889 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 17 23:37:02.112913 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 17 23:37:02.112957 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 17 23:37:02.112976 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250807)
Apr 17 23:37:02.112994 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 17 23:37:02.113012 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 17 23:37:02.113030 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 17 23:37:02.113096 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 17 23:37:02.113120 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 17 23:37:02.113139 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 17 23:37:02.113157 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 17 23:37:02.113175 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 17 23:37:02.113194 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 17 23:37:02.113211 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 17 23:37:02.113229 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 17 23:37:02.113246 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 17 23:37:02.113265 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 17 23:37:02.113287 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 17 23:37:02.113305 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:37:02.113323 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:37:02.113341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 17 23:37:02.113359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 17 23:37:02.113378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 17 23:37:02.113396 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 17 23:37:02.113415 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 17 23:37:02.113433 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Apr 17 23:37:02.113455 kernel: Zone ranges:
Apr 17 23:37:02.113474 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:37:02.113492 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Apr 17 23:37:02.113511 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Apr 17 23:37:02.113529 kernel: Movable zone start for each node
Apr 17 23:37:02.113547 kernel: Early memory node ranges
Apr 17 23:37:02.113565 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 17 23:37:02.113583 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 17 23:37:02.113602 kernel:   node   0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 17 23:37:02.113619 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 17 23:37:02.113642 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 17 23:37:02.113661 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 17 23:37:02.113679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:37:02.113695 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 17 23:37:02.113713 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 17 23:37:02.113731 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 17 23:37:02.113750 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 17 23:37:02.113768 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:37:02.113786 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:37:02.113809 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:37:02.113828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:37:02.113846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:37:02.113865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:37:02.113883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:37:02.113901 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:37:02.113919 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:37:02.113944 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:37:02.113962 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:37:02.113985 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:37:02.114004 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:37:02.114021 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:37:02.114040 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:37:02.114077 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:37:02.114094 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:37:02.114109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:37:02.114126 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:02.114148 kernel: random: crng init done
Apr 17 23:37:02.114164 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 17 23:37:02.114180 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:37:02.114196 kernel: Fallback order for Node 0: 0
Apr 17 23:37:02.114212 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1932280
Apr 17 23:37:02.114228 kernel: Policy zone: Normal
Apr 17 23:37:02.114245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:37:02.114262 kernel: software IO TLB: area num 2.
Apr 17 23:37:02.114279 kernel: Memory: 7513248K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347076K reserved, 0K cma-reserved)
Apr 17 23:37:02.114299 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:37:02.114316 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:37:02.114332 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:37:02.114349 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:37:02.114366 kernel: Dynamic Preempt: voluntary
Apr 17 23:37:02.114383 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:37:02.114402 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:37:02.114419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:37:02.114454 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:37:02.114472 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:37:02.114491 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:37:02.114509 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:37:02.114531 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:37:02.114549 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:37:02.114567 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:37:02.114586 kernel: Console: colour dummy device 80x25
Apr 17 23:37:02.114608 kernel: printk: console [ttyS0] enabled
Apr 17 23:37:02.114624 kernel: ACPI: Core revision 20230628
Apr 17 23:37:02.114640 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:37:02.114659 kernel: x2apic enabled
Apr 17 23:37:02.114677 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:37:02.114695 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 17 23:37:02.114712 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 17 23:37:02.114730 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 17 23:37:02.114749 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 17 23:37:02.114773 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 17 23:37:02.114792 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:37:02.114808 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 17 23:37:02.114825 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 17 23:37:02.114844 kernel: Spectre V2 : Mitigation: IBRS
Apr 17 23:37:02.114862 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:37:02.114878 kernel: RETBleed: Mitigation: IBRS
Apr 17 23:37:02.114896 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 17 23:37:02.114914 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Apr 17 23:37:02.114947 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 17 23:37:02.114965 kernel: MDS: Mitigation: Clear CPU buffers
Apr 17 23:37:02.114983 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:37:02.115001 kernel: active return thunk: its_return_thunk
Apr 17 23:37:02.115019 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:37:02.115036 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:37:02.115091 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:37:02.115112 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:37:02.115131 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Apr 17 23:37:02.115154 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 17 23:37:02.115172 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:37:02.115191 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:37:02.115210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:37:02.115228 kernel: landlock: Up and running.
Apr 17 23:37:02.115246 kernel: SELinux:  Initializing.
Apr 17 23:37:02.115266 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:37:02.115284 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:37:02.115304 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 17 23:37:02.115328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:37:02.115347 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:37:02.115363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:37:02.115381 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 17 23:37:02.115401 kernel: signal: max sigframe size: 1776
Apr 17 23:37:02.115420 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:37:02.115441 kernel: rcu: 	Max phase no-delay instances is 400.
Apr 17 23:37:02.115461 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:37:02.115480 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:37:02.115504 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:37:02.115523 kernel: .... node  #0, CPUs:      #1
Apr 17 23:37:02.115542 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:37:02.115563 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:37:02.115583 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:37:02.115602 kernel: smpboot: Max logical packages: 1
Apr 17 23:37:02.115622 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 17 23:37:02.115642 kernel: devtmpfs: initialized
Apr 17 23:37:02.115666 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:37:02.115686 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 17 23:37:02.115705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:37:02.115725 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:37:02.115745 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:37:02.115765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:37:02.115784 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:37:02.115803 kernel: audit: type=2000 audit(1776469020.282:1): state=initialized audit_enabled=0 res=1
Apr 17 23:37:02.115822 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:37:02.115845 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:37:02.115864 kernel: cpuidle: using governor menu
Apr 17 23:37:02.115884 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:37:02.115904 kernel: dca service started, version 1.12.1
Apr 17 23:37:02.115923 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:37:02.115950 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:37:02.115971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:37:02.115990 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:37:02.116010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:37:02.116034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:37:02.116068 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:37:02.116088 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:37:02.116108 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:37:02.116128 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:37:02.116148 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:37:02.116167 kernel: ACPI: Interpreter enabled
Apr 17 23:37:02.116188 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:37:02.116207 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:37:02.116227 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:37:02.116252 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 17 23:37:02.116272 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 17 23:37:02.116292 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:37:02.116592 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:37:02.116801 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:37:02.117005 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:37:02.117030 kernel: PCI host bridge to bus 0000:00
Apr 17 23:37:02.117272 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Apr 17 23:37:02.117455 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Apr 17 23:37:02.117631 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:37:02.117807 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 17 23:37:02.117994 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:37:02.118253 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:37:02.118475 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 17 23:37:02.118679 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 17 23:37:02.118877 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:37:02.119109 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 17 23:37:02.119309 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Apr 17 23:37:02.119503 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 17 23:37:02.119704 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:37:02.119907 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Apr 17 23:37:02.120134 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 17 23:37:02.120344 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:37:02.120541 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Apr 17 23:37:02.120742 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 17 23:37:02.120768 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:37:02.120787 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:37:02.120812 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:37:02.120831 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:37:02.120851 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:37:02.120871 kernel: iommu: Default domain type: Translated
Apr 17 23:37:02.120887 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:37:02.120904 kernel: efivars: Registered efivars operations
Apr 17 23:37:02.120921 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:37:02.120949 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:37:02.120968 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 17 23:37:02.120991 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 17 23:37:02.121010 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 17 23:37:02.121028 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 17 23:37:02.121096 kernel: vgaarb: loaded
Apr 17 23:37:02.121115 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:37:02.121135 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:37:02.121153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:37:02.121171 kernel: pnp: PnP ACPI init
Apr 17 23:37:02.121190 kernel: pnp: PnP ACPI: found 7 devices
Apr 17 23:37:02.121215 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:37:02.121234 kernel: NET: Registered PF_INET protocol family
Apr 17 23:37:02.121252 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:37:02.121272 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 17 23:37:02.121290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:37:02.121307 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:37:02.121323 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 17 23:37:02.121343 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 17 23:37:02.121370 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:37:02.121390 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:37:02.121411 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:37:02.121431 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:37:02.121703 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Apr 17 23:37:02.121896 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Apr 17 23:37:02.122117 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:37:02.122294 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 17 23:37:02.122508 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:37:02.122532 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:37:02.122551 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:37:02.122570 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 17 23:37:02.122589 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:37:02.122608 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 17 23:37:02.122626 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:37:02.122645 kernel: Initialise system trusted keyrings
Apr 17 23:37:02.122669 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 17 23:37:02.122687 kernel: Key type asymmetric registered
Apr 17 23:37:02.122705 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:37:02.122723 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:37:02.122742 kernel: io scheduler mq-deadline registered
Apr 17 23:37:02.122761 kernel: io scheduler kyber registered
Apr 17 23:37:02.122779 kernel: io scheduler bfq registered
Apr 17 23:37:02.122798 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:37:02.122817 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 17 23:37:02.123014 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 17 23:37:02.123038 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 17 23:37:02.123252 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 17 23:37:02.123276 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 17 23:37:02.123457 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 17 23:37:02.123480 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:37:02.123499 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123518 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123536 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123560 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123754 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 17 23:37:02.123779 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:37:02.123798 kernel: i8042: Warning: Keylock active
Apr 17 23:37:02.123815 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:37:02.123834 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:37:02.124035 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:37:02.124270 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:37:02.124445 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:37:01 UTC (1776469021)
Apr 17 23:37:02.124613 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:37:02.124635 kernel: intel_pstate: CPU model not supported
Apr 17 23:37:02.124654 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:37:02.124672 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:37:02.124691 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:37:02.124710 kernel: Segment Routing with IPv6
Apr 17 23:37:02.124728 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:37:02.124751 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:37:02.124770 kernel: Key type dns_resolver registered
Apr 17 23:37:02.124788 kernel: IPI shorthand broadcast: enabled
Apr 17 23:37:02.124807 kernel: sched_clock: Marking stable (871005186, 144596032)->(1063328508, -47727290)
Apr 17 23:37:02.124826 kernel: registered taskstats version 1
Apr 17 23:37:02.124844 kernel: Loading compiled-in X.509 certificates
Apr 17 23:37:02.124862 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:37:02.124880 kernel: Key type .fscrypt registered
Apr 17 23:37:02.124898 kernel: Key type fscrypt-provisioning registered
Apr 17 23:37:02.124920 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:37:02.124945 kernel: ima: No architecture policies found
Apr 17 23:37:02.124964 kernel: clk: Disabling unused clocks
Apr 17 23:37:02.124982 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:37:02.125000 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:37:02.125019 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:37:02.125037 kernel: Run /init as init process
Apr 17 23:37:02.125079 kernel:   with arguments:
Apr 17 23:37:02.125098 kernel:     /init
Apr 17 23:37:02.125121 kernel:   with environment:
Apr 17 23:37:02.125138 kernel:     HOME=/
Apr 17 23:37:02.125156 kernel:     TERM=linux
Apr 17 23:37:02.125175 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:37:02.125197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:37:02.125221 systemd[1]: Detected virtualization google.
Apr 17 23:37:02.125240 systemd[1]: Detected architecture x86-64.
Apr 17 23:37:02.125263 systemd[1]: Running in initrd.
Apr 17 23:37:02.125281 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:37:02.125300 systemd[1]: Hostname set to .
Apr 17 23:37:02.125320 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:37:02.125340 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:37:02.125360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:02.125379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:02.125399 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:37:02.125423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:37:02.125442 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:37:02.125462 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:37:02.125484 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:37:02.125504 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:37:02.125523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:02.125543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:37:02.125568 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:37:02.125609 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:37:02.125649 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:37:02.125673 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:37:02.125694 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:37:02.125714 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:37:02.125734 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:37:02.125759 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:37:02.125779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:02.125799 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:02.125820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:02.125840 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:37:02.125860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:37:02.125880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:37:02.125900 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:37:02.125925 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:37:02.125952 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:37:02.125972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:37:02.125992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:02.126098 systemd-journald[184]: Collecting audit messages is disabled.
Apr 17 23:37:02.126148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:37:02.126169 systemd-journald[184]: Journal started
Apr 17 23:37:02.126208 systemd-journald[184]: Runtime Journal (/run/log/journal/39a89608ea814fee83e6ac5262f77da9) is 8.0M, max 148.7M, 140.7M free.
Apr 17 23:37:02.130513 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:37:02.136114 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:37:02.136642 systemd-modules-load[185]: Inserted module 'overlay'
Apr 17 23:37:02.143187 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:37:02.158289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:37:02.161235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:37:02.164504 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:02.175727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:02.191374 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:37:02.194575 kernel: Bridge firewalling registered
Apr 17 23:37:02.193732 systemd-modules-load[185]: Inserted module 'br_netfilter'
Apr 17 23:37:02.195692 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:37:02.201639 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:37:02.205274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:37:02.207651 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:37:02.231846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:37:02.240036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:02.247556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:37:02.256552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:37:02.267269 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:37:02.274164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:37:02.306042 dracut-cmdline[217]: dracut-dracut-053
Apr 17 23:37:02.311041 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:02.333586 systemd-resolved[219]: Positive Trust Anchors:
Apr 17 23:37:02.333621 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:37:02.333690 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:37:02.340760 systemd-resolved[219]: Defaulting to hostname 'linux'.
Apr 17 23:37:02.343678 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:37:02.368336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:37:02.420101 kernel: SCSI subsystem initialized
Apr 17 23:37:02.432093 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:37:02.445108 kernel: iscsi: registered transport (tcp)
Apr 17 23:37:02.470994 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:37:02.471106 kernel: QLogic iSCSI HBA Driver
Apr 17 23:37:02.524960 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:37:02.532278 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:37:02.576319 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:37:02.576410 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:37:02.576439 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:37:02.623105 kernel: raid6: avx2x4 gen() 18175 MB/s
Apr 17 23:37:02.640088 kernel: raid6: avx2x2 gen() 18180 MB/s
Apr 17 23:37:02.657535 kernel: raid6: avx2x1 gen() 14301 MB/s
Apr 17 23:37:02.657569 kernel: raid6: using algorithm avx2x2 gen() 18180 MB/s
Apr 17 23:37:02.675715 kernel: raid6: .... xor() 17920 MB/s, rmw enabled
Apr 17 23:37:02.675772 kernel: raid6: using avx2x2 recovery algorithm
Apr 17 23:37:02.699089 kernel: xor: automatically using best checksumming function   avx
Apr 17 23:37:02.875098 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:37:02.888751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:37:02.896311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:37:02.928334 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Apr 17 23:37:02.935911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:37:02.944211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:37:02.977998 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Apr 17 23:37:03.017002 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:37:03.027352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:37:03.113370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:37:03.148718 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:37:03.201469 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:37:03.224099 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:37:03.317225 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:37:03.317269 kernel: scsi host0: Virtio SCSI HBA
Apr 17 23:37:03.317330 kernel: blk-mq: reduced tag depth to 10240
Apr 17 23:37:03.253078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:03.350356 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Apr 17 23:37:03.350454 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:37:03.350483 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:37:03.268517 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:37:03.337196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:37:03.396812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:37:03.475022 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Apr 17 23:37:03.475434 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Apr 17 23:37:03.475709 kernel: sd 0:0:1:0: [sda] Write Protect is off
Apr 17 23:37:03.475940 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Apr 17 23:37:03.476183 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 17 23:37:03.476407 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:37:03.476444 kernel: GPT:17805311 != 33554431
Apr 17 23:37:03.476467 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:37:03.476491 kernel: GPT:17805311 != 33554431
Apr 17 23:37:03.476513 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:37:03.476544 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:03.476569 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Apr 17 23:37:03.397023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:03.435396 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:03.511166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:37:03.511444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:03.559203 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (463)
Apr 17 23:37:03.523430 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:03.588222 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (456)
Apr 17 23:37:03.576551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:03.609142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:37:03.631729 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Apr 17 23:37:03.652013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:03.660770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Apr 17 23:37:03.709325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Apr 17 23:37:03.709639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Apr 17 23:37:03.762471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 17 23:37:03.767300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:37:03.799298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:03.824153 disk-uuid[542]: Primary Header is updated.
Apr 17 23:37:03.824153 disk-uuid[542]: Secondary Entries is updated.
Apr 17 23:37:03.824153 disk-uuid[542]: Secondary Header is updated.
Apr 17 23:37:03.848336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:03.867149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:03.871511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:03.888241 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:04.889089 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:04.890001 disk-uuid[543]: The operation has completed successfully.
Apr 17 23:37:04.979227 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:37:04.979401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:37:05.016355 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:37:05.047379 sh[568]: Success
Apr 17 23:37:05.073123 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 17 23:37:05.179914 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:37:05.187461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:37:05.214695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:37:05.268134 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:37:05.268250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:05.268277 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:37:05.284535 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:37:05.284645 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:37:05.324086 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:37:05.334013 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:37:05.335140 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:37:05.341334 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:37:05.362307 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:37:05.419271 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:05.419390 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:05.419417 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:05.445341 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:37:05.445447 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:05.472627 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:05.472012 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:37:05.484203 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:37:05.508406 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:37:05.597176 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:37:05.602345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:37:05.723560 systemd-networkd[750]: lo: Link UP
Apr 17 23:37:05.723575 systemd-networkd[750]: lo: Gained carrier
Apr 17 23:37:05.727142 systemd-networkd[750]: Enumeration completed
Apr 17 23:37:05.737085 ignition[681]: Ignition 2.19.0
Apr 17 23:37:05.727598 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:37:05.737094 ignition[681]: Stage: fetch-offline
Apr 17 23:37:05.728127 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:37:05.737147 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.728134 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:37:05.737163 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.730709 systemd-networkd[750]: eth0: Link UP
Apr 17 23:37:05.737389 ignition[681]: parsed url from cmdline: ""
Apr 17 23:37:05.730717 systemd-networkd[750]: eth0: Gained carrier
Apr 17 23:37:05.737394 ignition[681]: no config URL provided
Apr 17 23:37:05.730732 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:37:05.737401 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.743159 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1'
Apr 17 23:37:05.737413 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.743177 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.110/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 17 23:37:05.737422 ignition[681]: failed to fetch config: resource requires networking
Apr 17 23:37:05.751709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:37:05.737714 ignition[681]: Ignition finished successfully
Apr 17 23:37:05.770066 systemd[1]: Reached target network.target - Network.
Apr 17 23:37:05.836268 ignition[759]: Ignition 2.19.0
Apr 17 23:37:05.799337 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:37:05.836287 ignition[759]: Stage: fetch
Apr 17 23:37:05.846248 unknown[759]: fetched base config from "system"
Apr 17 23:37:05.836498 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.846261 unknown[759]: fetched base config from "system"
Apr 17 23:37:05.836511 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.846271 unknown[759]: fetched user config from "gcp"
Apr 17 23:37:05.836642 ignition[759]: parsed url from cmdline: ""
Apr 17 23:37:05.849216 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:37:05.836650 ignition[759]: no config URL provided
Apr 17 23:37:05.874300 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:37:05.836662 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.918581 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:37:05.836673 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.934262 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:37:05.836698 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Apr 17 23:37:05.992176 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:37:05.840741 ignition[759]: GET result: OK
Apr 17 23:37:06.006503 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:37:05.840856 ignition[759]: parsing config with SHA512: 80c77854a799112ec0b0bd42f22aa479dd99d0fdeb9eb3df2623f8fe6526f5de87e133edff89a667bf8308575b81873293af5b5b6476916fdfca0df2c0e624db
Apr 17 23:37:06.027289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:37:05.847185 ignition[759]: fetch: fetch complete
Apr 17 23:37:06.044257 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:37:05.847196 ignition[759]: fetch: fetch passed
Apr 17 23:37:06.059256 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:37:05.847293 ignition[759]: Ignition finished successfully
Apr 17 23:37:06.059397 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:37:05.915712 ignition[765]: Ignition 2.19.0
Apr 17 23:37:06.089322 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:37:05.915724 ignition[765]: Stage: kargs
Apr 17 23:37:05.915938 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.915951 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.917039 ignition[765]: kargs: kargs passed
Apr 17 23:37:05.917140 ignition[765]: Ignition finished successfully
Apr 17 23:37:05.989502 ignition[770]: Ignition 2.19.0
Apr 17 23:37:05.989515 ignition[770]: Stage: disks
Apr 17 23:37:05.989768 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.989780 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.990796 ignition[770]: disks: disks passed
Apr 17 23:37:05.990863 ignition[770]: Ignition finished successfully
Apr 17 23:37:06.149563 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 17 23:37:06.346923 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:37:06.353187 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:37:06.508087 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:37:06.508881 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:37:06.509848 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:37:06.549254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:37:06.566238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:37:06.584446 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:37:06.584542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:37:06.670259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (787)
Apr 17 23:37:06.670355 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:06.670379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:06.670404 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:06.670421 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:37:06.670435 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:06.584586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:37:06.608188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:37:06.680177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:37:06.695283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:37:06.870555 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:37:06.881221 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:37:06.891237 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:37:06.901253 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:37:06.981324 systemd-networkd[750]: eth0: Gained IPv6LL
Apr 17 23:37:07.054707 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:37:07.070219 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:37:07.089325 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:37:07.116293 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:37:07.135222 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:07.151111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:37:07.166449 ignition[900]: INFO     : Ignition 2.19.0
Apr 17 23:37:07.166449 ignition[900]: INFO     : Stage: mount
Apr 17 23:37:07.194363 ignition[900]: INFO     : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:07.194363 ignition[900]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:07.194363 ignition[900]: INFO     : mount: mount passed
Apr 17 23:37:07.194363 ignition[900]: INFO     : Ignition finished successfully
Apr 17 23:37:07.170641 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:37:07.193245 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:37:07.517432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:37:07.549088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (911)
Apr 17 23:37:07.567833 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:07.567935 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:07.567961 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:07.591412 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:37:07.591502 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:07.594609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:37:07.633877 ignition[928]: INFO     : Ignition 2.19.0
Apr 17 23:37:07.633877 ignition[928]: INFO     : Stage: files
Apr 17 23:37:07.648720 ignition[928]: INFO     : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:07.648720 ignition[928]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:07.648720 ignition[928]: DEBUG    : files: compiled without relabeling support, skipping
Apr 17 23:37:07.648720 ignition[928]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 17 23:37:07.648720 ignition[928]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:37:07.648720 ignition[928]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:37:07.648720 ignition[928]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 17 23:37:07.648720 ignition[928]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:37:07.646443 unknown[928]: wrote ssh authorized keys file for user: core
Apr 17 23:37:07.752229 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:37:07.752229 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:37:07.786237 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:37:07.941035 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:37:07.941035 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:07.974223 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 17 23:37:23.403939 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:37:24.113584 ignition[928]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:24.113584 ignition[928]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:37:24.132410 ignition[928]: INFO     : files: files passed
Apr 17 23:37:24.132410 ignition[928]: INFO     : Ignition finished successfully
Apr 17 23:37:24.118673 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:37:24.161270 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:37:24.192299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:37:24.206778 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:37:24.365264 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:24.365264 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:24.206907 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:37:24.403244 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:24.300777 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:37:24.314542 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:37:24.340321 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:37:24.420949 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:37:24.421124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:37:24.442082 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:37:24.461277 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:37:24.481375 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:37:24.488255 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:37:24.573191 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:37:24.579283 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:37:24.625331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:37:24.636376 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:24.657509 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:37:24.675422 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:37:24.675586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:37:24.702485 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:37:24.723434 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:37:24.741521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:37:24.759482 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:37:24.780454 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:37:24.802660 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:37:24.823465 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:37:24.844559 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:37:24.865471 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:37:24.885436 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:37:24.903365 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:37:24.903539 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:37:24.929480 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:37:24.949438 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:24.970358 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:37:24.970699 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:24.992348 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:37:24.992523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:37:25.023490 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:37:25.023847 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:37:25.043524 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:37:25.043772 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:37:25.102367 ignition[980]: INFO : Ignition 2.19.0 Apr 17 23:37:25.102367 ignition[980]: INFO : Stage: umount Apr 17 23:37:25.102367 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:25.102367 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:37:25.102367 ignition[980]: INFO : umount: umount passed Apr 17 23:37:25.102367 ignition[980]: INFO : Ignition finished successfully Apr 17 23:37:25.070331 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:37:25.110189 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:37:25.110474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:37:25.135492 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:37:25.198214 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:37:25.198504 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:37:25.220496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:37:25.220660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:37:25.253028 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:37:25.254146 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:37:25.254272 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:37:25.270800 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Apr 17 23:37:25.270916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:37:25.292451 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:37:25.292575 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:37:25.313502 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:37:25.313567 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:37:25.329427 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:37:25.329506 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:37:25.354448 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:37:25.354528 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:37:25.363488 systemd[1]: Stopped target network.target - Network. Apr 17 23:37:25.393327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:37:25.393450 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:37:25.404495 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:37:25.437227 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:37:25.441150 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:37:25.463237 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:37:25.488260 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:37:25.496490 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:37:25.496553 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:37:25.531434 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:37:25.531506 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:37:25.557427 systemd[1]: ignition-setup.service: Deactivated successfully. 
Apr 17 23:37:25.557524 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:37:25.586461 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:37:25.586562 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:37:25.611443 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:37:25.611527 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:37:25.630652 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:37:25.635124 systemd-networkd[750]: eth0: DHCPv6 lease lost Apr 17 23:37:25.648458 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:37:25.668856 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:37:25.669001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:37:25.694911 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:37:25.695186 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:37:25.705065 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:37:25.705239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:37:25.747255 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:37:25.767210 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:37:25.767337 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:37:25.778406 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:37:25.778485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:37:25.785484 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:37:25.785571 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Apr 17 23:37:25.803503 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:37:25.803617 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:37:25.831579 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:37:25.852825 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:37:25.853013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:37:25.878795 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:37:25.878866 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:37:25.891438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:37:25.891495 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:37:25.919411 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:37:25.919501 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:37:25.949553 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:37:25.949644 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:37:25.987501 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:37:25.987625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:26.036322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:37:26.257099 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Apr 17 23:37:26.054438 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:37:26.054521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:37:26.071522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 17 23:37:26.071611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:26.103988 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:37:26.104161 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:37:26.123695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:37:26.123827 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:37:26.153815 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:37:26.185316 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:37:26.220605 systemd[1]: Switching root. Apr 17 23:37:26.363201 systemd-journald[184]: Journal stopped Apr 17 23:37:02.112200 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026 Apr 17 23:37:02.112268 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:37:02.112286 kernel: BIOS-provided physical RAM map: Apr 17 23:37:02.112300 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Apr 17 23:37:02.112314 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Apr 17 23:37:02.112328 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Apr 17 23:37:02.112344 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Apr 17 23:37:02.112363 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Apr 17 23:37:02.112377 kernel: BIOS-e820: [mem 
0x0000000000100000-0x00000000bf8ecfff] usable Apr 17 23:37:02.112391 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Apr 17 23:37:02.112406 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Apr 17 23:37:02.112420 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Apr 17 23:37:02.112433 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Apr 17 23:37:02.112447 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Apr 17 23:37:02.112469 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Apr 17 23:37:02.112486 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Apr 17 23:37:02.112502 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Apr 17 23:37:02.112518 kernel: NX (Execute Disable) protection: active Apr 17 23:37:02.112532 kernel: APIC: Static calls initialized Apr 17 23:37:02.112549 kernel: efi: EFI v2.7 by EDK II Apr 17 23:37:02.112566 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd300018 Apr 17 23:37:02.112583 kernel: SMBIOS 2.4 present. 
Apr 17 23:37:02.112600 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026 Apr 17 23:37:02.112616 kernel: Hypervisor detected: KVM Apr 17 23:37:02.112637 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 17 23:37:02.112654 kernel: kvm-clock: using sched offset of 13357809649 cycles Apr 17 23:37:02.112669 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 17 23:37:02.112685 kernel: tsc: Detected 2299.998 MHz processor Apr 17 23:37:02.112702 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 17 23:37:02.112720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 17 23:37:02.112737 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Apr 17 23:37:02.112754 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Apr 17 23:37:02.112770 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 17 23:37:02.112791 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Apr 17 23:37:02.112807 kernel: Using GB pages for direct mapping Apr 17 23:37:02.112824 kernel: Secure boot disabled Apr 17 23:37:02.112839 kernel: ACPI: Early table checksum verification disabled Apr 17 23:37:02.112856 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Apr 17 23:37:02.112873 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Apr 17 23:37:02.112889 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Apr 17 23:37:02.112913 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 17 23:37:02.112957 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 17 23:37:02.112976 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250807) Apr 17 23:37:02.112994 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 17 23:37:02.113012 kernel: ACPI: SRAT 
0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 17 23:37:02.113030 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 17 23:37:02.113096 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 17 23:37:02.113120 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 17 23:37:02.113139 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 17 23:37:02.113157 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 17 23:37:02.113175 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 17 23:37:02.113194 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 17 23:37:02.113211 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 17 23:37:02.113229 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 17 23:37:02.113246 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 17 23:37:02.113265 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 17 23:37:02.113287 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 17 23:37:02.113305 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 17 23:37:02.113323 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 17 23:37:02.113341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 17 23:37:02.113359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Apr 17 23:37:02.113378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 17 23:37:02.113396 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 17 23:37:02.113415 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 17 23:37:02.113433 kernel: NODE_DATA(0) allocated [mem 
0x21fff8000-0x21fffdfff] Apr 17 23:37:02.113455 kernel: Zone ranges: Apr 17 23:37:02.113474 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:37:02.113492 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:37:02.113511 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 17 23:37:02.113529 kernel: Movable zone start for each node Apr 17 23:37:02.113547 kernel: Early memory node ranges Apr 17 23:37:02.113565 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 17 23:37:02.113583 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 17 23:37:02.113602 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 17 23:37:02.113619 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 17 23:37:02.113642 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 17 23:37:02.113661 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 17 23:37:02.113679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:37:02.113695 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 17 23:37:02.113713 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 17 23:37:02.113731 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 17 23:37:02.113750 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 17 23:37:02.113768 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 17 23:37:02.113786 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 23:37:02.113809 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:37:02.113828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 23:37:02.113846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:37:02.113865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 23:37:02.113883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 23:37:02.113901 kernel: 
ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:37:02.113919 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:37:02.113944 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 17 23:37:02.113962 kernel: Booting paravirtualized kernel on KVM Apr 17 23:37:02.113985 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:37:02.114004 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:37:02.114021 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:37:02.114040 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:37:02.114077 kernel: pcpu-alloc: [0] 0 1 Apr 17 23:37:02.114094 kernel: kvm-guest: PV spinlocks enabled Apr 17 23:37:02.114109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 23:37:02.114126 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:37:02.114148 kernel: random: crng init done Apr 17 23:37:02.114164 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 17 23:37:02.114180 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:37:02.114196 kernel: Fallback order for Node 0: 0 Apr 17 23:37:02.114212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 17 23:37:02.114228 kernel: Policy zone: Normal Apr 17 23:37:02.114245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:37:02.114262 kernel: software IO TLB: area num 2. 
Apr 17 23:37:02.114279 kernel: Memory: 7513248K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347076K reserved, 0K cma-reserved) Apr 17 23:37:02.114299 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:37:02.114316 kernel: Kernel/User page tables isolation: enabled Apr 17 23:37:02.114332 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:37:02.114349 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:37:02.114366 kernel: Dynamic Preempt: voluntary Apr 17 23:37:02.114383 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:37:02.114402 kernel: rcu: RCU event tracing is enabled. Apr 17 23:37:02.114419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 17 23:37:02.114454 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:37:02.114472 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:37:02.114491 kernel: Tracing variant of Tasks RCU enabled. Apr 17 23:37:02.114509 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:37:02.114531 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:37:02.114549 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 17 23:37:02.114567 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 17 23:37:02.114586 kernel: Console: colour dummy device 80x25 Apr 17 23:37:02.114608 kernel: printk: console [ttyS0] enabled Apr 17 23:37:02.114624 kernel: ACPI: Core revision 20230628 Apr 17 23:37:02.114640 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:37:02.114659 kernel: x2apic enabled Apr 17 23:37:02.114677 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 23:37:02.114695 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 17 23:37:02.114712 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 17 23:37:02.114730 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 17 23:37:02.114749 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 17 23:37:02.114773 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 17 23:37:02.114792 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:37:02.114808 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 17 23:37:02.114825 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 17 23:37:02.114844 kernel: Spectre V2 : Mitigation: IBRS Apr 17 23:37:02.114862 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:37:02.114878 kernel: RETBleed: Mitigation: IBRS Apr 17 23:37:02.114896 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 17 23:37:02.114914 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 17 23:37:02.114947 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 17 23:37:02.114965 kernel: MDS: Mitigation: Clear CPU buffers Apr 17 23:37:02.114983 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:37:02.115001 kernel: active return thunk: its_return_thunk Apr 17 23:37:02.115019 
kernel: ITS: Mitigation: Aligned branch/return thunks Apr 17 23:37:02.115036 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:37:02.115091 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:37:02.115112 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:37:02.115131 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:37:02.115154 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 17 23:37:02.115172 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:37:02.115191 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:37:02.115210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:37:02.115228 kernel: landlock: Up and running. Apr 17 23:37:02.115246 kernel: SELinux: Initializing. Apr 17 23:37:02.115266 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:37:02.115284 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:37:02.115304 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 17 23:37:02.115328 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:37:02.115347 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:37:02.115363 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:37:02.115381 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 17 23:37:02.115401 kernel: signal: max sigframe size: 1776 Apr 17 23:37:02.115420 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:37:02.115441 kernel: rcu: Max phase no-delay instances is 400. 
Apr 17 23:37:02.115461 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 23:37:02.115480 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:37:02.115504 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:37:02.115523 kernel: .... node #0, CPUs: #1 Apr 17 23:37:02.115542 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 17 23:37:02.115563 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 17 23:37:02.115583 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:37:02.115602 kernel: smpboot: Max logical packages: 1 Apr 17 23:37:02.115622 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 17 23:37:02.115642 kernel: devtmpfs: initialized Apr 17 23:37:02.115666 kernel: x86/mm: Memory block size: 128MB Apr 17 23:37:02.115686 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 17 23:37:02.115705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:37:02.115725 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:37:02.115745 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:37:02.115765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:37:02.115784 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:37:02.115803 kernel: audit: type=2000 audit(1776469020.282:1): state=initialized audit_enabled=0 res=1 Apr 17 23:37:02.115822 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:37:02.115845 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:37:02.115864 kernel: cpuidle: using governor menu Apr 17 23:37:02.115884 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 
23:37:02.115904 kernel: dca service started, version 1.12.1 Apr 17 23:37:02.115923 kernel: PCI: Using configuration type 1 for base access Apr 17 23:37:02.115950 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 17 23:37:02.115971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:37:02.115990 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:37:02.116010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:37:02.116034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:37:02.116068 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:37:02.116088 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:37:02.116108 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:37:02.116128 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 17 23:37:02.116148 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:37:02.116167 kernel: ACPI: Interpreter enabled Apr 17 23:37:02.116188 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 23:37:02.116207 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:37:02.116227 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:37:02.116252 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 17 23:37:02.116272 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 17 23:37:02.116292 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 23:37:02.116592 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 17 23:37:02.116801 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 17 23:37:02.117005 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 17 23:37:02.117030 kernel: PCI host bridge to bus 0000:00 Apr 17 
Apr 17 23:37:02.117272 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:37:02.117455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:37:02.117631 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:37:02.117807 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 17 23:37:02.117994 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:37:02.118253 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:37:02.118475 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 17 23:37:02.118679 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 17 23:37:02.118877 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:37:02.119109 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 17 23:37:02.119309 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 17 23:37:02.119503 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 17 23:37:02.119704 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:37:02.119907 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Apr 17 23:37:02.120134 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 17 23:37:02.120344 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:37:02.120541 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 17 23:37:02.120742 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 17 23:37:02.120768 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:37:02.120787 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:37:02.120812 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:37:02.120831 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:37:02.120851 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:37:02.120871 kernel: iommu: Default domain type: Translated
Apr 17 23:37:02.120887 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:37:02.120904 kernel: efivars: Registered efivars operations
Apr 17 23:37:02.120921 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:37:02.120949 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:37:02.120968 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 17 23:37:02.120991 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 17 23:37:02.121010 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 17 23:37:02.121028 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 17 23:37:02.121096 kernel: vgaarb: loaded
Apr 17 23:37:02.121115 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:37:02.121135 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:37:02.121153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:37:02.121171 kernel: pnp: PnP ACPI init
Apr 17 23:37:02.121190 kernel: pnp: PnP ACPI: found 7 devices
Apr 17 23:37:02.121215 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:37:02.121234 kernel: NET: Registered PF_INET protocol family
Apr 17 23:37:02.121252 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:37:02.121272 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 17 23:37:02.121290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:37:02.121307 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:37:02.121323 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 17 23:37:02.121343 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 17 23:37:02.121370 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:37:02.121390 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:37:02.121411 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:37:02.121431 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:37:02.121703 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:37:02.121896 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:37:02.122117 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:37:02.122294 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 17 23:37:02.122508 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:37:02.122532 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:37:02.122551 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:37:02.122570 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 17 23:37:02.122589 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:37:02.122608 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 17 23:37:02.122626 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:37:02.122645 kernel: Initialise system trusted keyrings
Apr 17 23:37:02.122669 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 17 23:37:02.122687 kernel: Key type asymmetric registered
Apr 17 23:37:02.122705 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:37:02.122723 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:37:02.122742 kernel: io scheduler mq-deadline registered
Apr 17 23:37:02.122761 kernel: io scheduler kyber registered
Apr 17 23:37:02.122779 kernel: io scheduler bfq registered
Apr 17 23:37:02.122798 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:37:02.122817 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 17 23:37:02.123014 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 17 23:37:02.123038 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 17 23:37:02.123252 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 17 23:37:02.123276 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 17 23:37:02.123457 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 17 23:37:02.123480 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:37:02.123499 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123518 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123536 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123560 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 17 23:37:02.123754 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 17 23:37:02.123779 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:37:02.123798 kernel: i8042: Warning: Keylock active
Apr 17 23:37:02.123815 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:37:02.123834 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:37:02.124035 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:37:02.124270 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:37:02.124445 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:37:01 UTC (1776469021)
Apr 17 23:37:02.124613 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:37:02.124635 kernel: intel_pstate: CPU model not supported
Apr 17 23:37:02.124654 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:37:02.124672 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:37:02.124691 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:37:02.124710 kernel: Segment Routing with IPv6
Apr 17 23:37:02.124728 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:37:02.124751 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:37:02.124770 kernel: Key type dns_resolver registered
Apr 17 23:37:02.124788 kernel: IPI shorthand broadcast: enabled
Apr 17 23:37:02.124807 kernel: sched_clock: Marking stable (871005186, 144596032)->(1063328508, -47727290)
Apr 17 23:37:02.124826 kernel: registered taskstats version 1
Apr 17 23:37:02.124844 kernel: Loading compiled-in X.509 certificates
Apr 17 23:37:02.124862 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:37:02.124880 kernel: Key type .fscrypt registered
Apr 17 23:37:02.124898 kernel: Key type fscrypt-provisioning registered
Apr 17 23:37:02.124920 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:37:02.124945 kernel: ima: No architecture policies found
Apr 17 23:37:02.124964 kernel: clk: Disabling unused clocks
Apr 17 23:37:02.124982 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:37:02.125000 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:37:02.125019 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:37:02.125037 kernel: Run /init as init process
Apr 17 23:37:02.125079 kernel: with arguments:
Apr 17 23:37:02.125098 kernel: /init
Apr 17 23:37:02.125121 kernel: with environment:
Apr 17 23:37:02.125138 kernel: HOME=/
Apr 17 23:37:02.125156 kernel: TERM=linux
Apr 17 23:37:02.125175 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:37:02.125197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:37:02.125221 systemd[1]: Detected virtualization google.
Apr 17 23:37:02.125240 systemd[1]: Detected architecture x86-64.
Apr 17 23:37:02.125263 systemd[1]: Running in initrd.
Apr 17 23:37:02.125281 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:37:02.125300 systemd[1]: Hostname set to <localhost>.
Apr 17 23:37:02.125320 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:37:02.125340 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:37:02.125360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:02.125379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:02.125399 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:37:02.125423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:37:02.125442 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:37:02.125462 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:37:02.125484 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:37:02.125504 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:37:02.125523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:02.125543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:37:02.125568 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:37:02.125609 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:37:02.125649 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:37:02.125673 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:37:02.125694 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:37:02.125714 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:37:02.125734 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:37:02.125759 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:37:02.125779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:02.125799 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:02.125820 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:02.125840 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:37:02.125860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:37:02.125880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:37:02.125900 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:37:02.125925 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:37:02.125952 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:37:02.125972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:37:02.125992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:02.126098 systemd-journald[184]: Collecting audit messages is disabled.
Apr 17 23:37:02.126148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:37:02.126169 systemd-journald[184]: Journal started
Apr 17 23:37:02.126208 systemd-journald[184]: Runtime Journal (/run/log/journal/39a89608ea814fee83e6ac5262f77da9) is 8.0M, max 148.7M, 140.7M free.
Apr 17 23:37:02.130513 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:37:02.136114 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:37:02.136642 systemd-modules-load[185]: Inserted module 'overlay'
Apr 17 23:37:02.143187 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:37:02.158289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:37:02.161235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:37:02.164504 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:02.175727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:02.191374 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:37:02.194575 kernel: Bridge firewalling registered
Apr 17 23:37:02.193732 systemd-modules-load[185]: Inserted module 'br_netfilter'
Apr 17 23:37:02.195692 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:37:02.201639 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:37:02.205274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:37:02.207651 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:37:02.231846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:37:02.240036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:02.247556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:37:02.256552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:37:02.267269 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:37:02.274164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:37:02.306042 dracut-cmdline[217]: dracut-dracut-053
Apr 17 23:37:02.311041 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:02.333586 systemd-resolved[219]: Positive Trust Anchors:
Apr 17 23:37:02.333621 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:37:02.333690 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:37:02.340760 systemd-resolved[219]: Defaulting to hostname 'linux'.
Apr 17 23:37:02.343678 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:37:02.368336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:37:02.420101 kernel: SCSI subsystem initialized
Apr 17 23:37:02.432093 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:37:02.445108 kernel: iscsi: registered transport (tcp)
Apr 17 23:37:02.470994 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:37:02.471106 kernel: QLogic iSCSI HBA Driver
Apr 17 23:37:02.524960 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:37:02.532278 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:37:02.576319 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:37:02.576410 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:37:02.576439 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:37:02.623105 kernel: raid6: avx2x4 gen() 18175 MB/s
Apr 17 23:37:02.640088 kernel: raid6: avx2x2 gen() 18180 MB/s
Apr 17 23:37:02.657535 kernel: raid6: avx2x1 gen() 14301 MB/s
Apr 17 23:37:02.657569 kernel: raid6: using algorithm avx2x2 gen() 18180 MB/s
Apr 17 23:37:02.675715 kernel: raid6: .... xor() 17920 MB/s, rmw enabled
Apr 17 23:37:02.675772 kernel: raid6: using avx2x2 recovery algorithm
Apr 17 23:37:02.699089 kernel: xor: automatically using best checksumming function avx
Apr 17 23:37:02.875098 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:37:02.888751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:37:02.896311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:37:02.928334 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Apr 17 23:37:02.935911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:37:02.944211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:37:02.977998 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Apr 17 23:37:03.017002 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:37:03.027352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:37:03.113370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:37:03.148718 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:37:03.201469 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:37:03.224099 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:37:03.317225 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:37:03.317269 kernel: scsi host0: Virtio SCSI HBA
Apr 17 23:37:03.317330 kernel: blk-mq: reduced tag depth to 10240
Apr 17 23:37:03.253078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:03.350356 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Apr 17 23:37:03.350454 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:37:03.350483 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:37:03.268517 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:37:03.337196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:37:03.396812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:37:03.475022 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Apr 17 23:37:03.475434 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Apr 17 23:37:03.475709 kernel: sd 0:0:1:0: [sda] Write Protect is off
Apr 17 23:37:03.475940 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Apr 17 23:37:03.476183 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 17 23:37:03.476407 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:37:03.476444 kernel: GPT:17805311 != 33554431
Apr 17 23:37:03.476467 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:37:03.476491 kernel: GPT:17805311 != 33554431
Apr 17 23:37:03.476513 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:37:03.476544 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:03.476569 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Apr 17 23:37:03.397023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:03.435396 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:03.511166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:37:03.511444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:03.559203 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (463)
Apr 17 23:37:03.523430 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:03.588222 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (456)
Apr 17 23:37:03.576551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:03.609142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:37:03.631729 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Apr 17 23:37:03.652013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:03.660770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Apr 17 23:37:03.709325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Apr 17 23:37:03.709639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Apr 17 23:37:03.762471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 17 23:37:03.767300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:37:03.799298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:03.824153 disk-uuid[542]: Primary Header is updated.
Apr 17 23:37:03.824153 disk-uuid[542]: Secondary Entries is updated.
Apr 17 23:37:03.824153 disk-uuid[542]: Secondary Header is updated.
Apr 17 23:37:03.848336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:03.867149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:03.871511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:03.888241 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:04.889089 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:37:04.890001 disk-uuid[543]: The operation has completed successfully.
Apr 17 23:37:04.979227 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:37:04.979401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:37:05.016355 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:37:05.047379 sh[568]: Success
Apr 17 23:37:05.073123 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 17 23:37:05.179914 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:37:05.187461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:37:05.214695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:37:05.268134 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:37:05.268250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:05.268277 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:37:05.284535 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:37:05.284645 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:37:05.324086 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:37:05.334013 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:37:05.335140 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:37:05.341334 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:37:05.362307 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:37:05.419271 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:05.419390 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:05.419417 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:05.445341 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:37:05.445447 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:05.472627 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:05.472012 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:37:05.484203 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:37:05.508406 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:37:05.597176 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:37:05.602345 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:37:05.723560 systemd-networkd[750]: lo: Link UP
Apr 17 23:37:05.723575 systemd-networkd[750]: lo: Gained carrier
Apr 17 23:37:05.727142 systemd-networkd[750]: Enumeration completed
Apr 17 23:37:05.737085 ignition[681]: Ignition 2.19.0
Apr 17 23:37:05.727598 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:37:05.737094 ignition[681]: Stage: fetch-offline
Apr 17 23:37:05.728127 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:37:05.737147 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.728134 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:37:05.737163 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.730709 systemd-networkd[750]: eth0: Link UP
Apr 17 23:37:05.737389 ignition[681]: parsed url from cmdline: ""
Apr 17 23:37:05.730717 systemd-networkd[750]: eth0: Gained carrier
Apr 17 23:37:05.737394 ignition[681]: no config URL provided
Apr 17 23:37:05.730732 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:37:05.737401 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.743159 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1'
Apr 17 23:37:05.737413 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.743177 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.110/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 17 23:37:05.737422 ignition[681]: failed to fetch config: resource requires networking
Apr 17 23:37:05.751709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:37:05.737714 ignition[681]: Ignition finished successfully
Apr 17 23:37:05.770066 systemd[1]: Reached target network.target - Network.
Apr 17 23:37:05.836268 ignition[759]: Ignition 2.19.0
Apr 17 23:37:05.799337 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:37:05.836287 ignition[759]: Stage: fetch
Apr 17 23:37:05.846248 unknown[759]: fetched base config from "system"
Apr 17 23:37:05.836498 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.846261 unknown[759]: fetched base config from "system"
Apr 17 23:37:05.836511 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.846271 unknown[759]: fetched user config from "gcp"
Apr 17 23:37:05.836642 ignition[759]: parsed url from cmdline: ""
Apr 17 23:37:05.849216 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:37:05.836650 ignition[759]: no config URL provided
Apr 17 23:37:05.874300 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:37:05.836662 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.918581 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:37:05.836673 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:37:05.934262 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:37:05.836698 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Apr 17 23:37:05.992176 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:37:05.840741 ignition[759]: GET result: OK
Apr 17 23:37:06.006503 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:37:05.840856 ignition[759]: parsing config with SHA512: 80c77854a799112ec0b0bd42f22aa479dd99d0fdeb9eb3df2623f8fe6526f5de87e133edff89a667bf8308575b81873293af5b5b6476916fdfca0df2c0e624db
Apr 17 23:37:06.027289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:37:05.847185 ignition[759]: fetch: fetch complete
Apr 17 23:37:06.044257 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:37:05.847196 ignition[759]: fetch: fetch passed
Apr 17 23:37:06.059256 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:37:05.847293 ignition[759]: Ignition finished successfully
Apr 17 23:37:06.059397 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:37:05.915712 ignition[765]: Ignition 2.19.0
Apr 17 23:37:06.089322 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:37:05.915724 ignition[765]: Stage: kargs
Apr 17 23:37:05.915938 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.915951 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.917039 ignition[765]: kargs: kargs passed
Apr 17 23:37:05.917140 ignition[765]: Ignition finished successfully
Apr 17 23:37:05.989502 ignition[770]: Ignition 2.19.0
Apr 17 23:37:05.989515 ignition[770]: Stage: disks
Apr 17 23:37:05.989768 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:05.989780 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:05.990796 ignition[770]: disks: disks passed
Apr 17 23:37:05.990863 ignition[770]: Ignition finished successfully
Apr 17 23:37:06.149563 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 17 23:37:06.346923 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:37:06.353187 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:37:06.508087 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:37:06.508881 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:37:06.509848 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:37:06.549254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:37:06.566238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:37:06.584446 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:37:06.584542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:37:06.670259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (787)
Apr 17 23:37:06.670355 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:06.670379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:06.670404 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:06.670421 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:37:06.670435 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:06.584586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:37:06.608188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:37:06.680177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:37:06.695283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:37:06.870555 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:37:06.881221 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:37:06.891237 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:37:06.901253 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:37:06.981324 systemd-networkd[750]: eth0: Gained IPv6LL
Apr 17 23:37:07.054707 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:37:07.070219 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:37:07.089325 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:37:07.116293 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:37:07.135222 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:07.151111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:37:07.166449 ignition[900]: INFO : Ignition 2.19.0
Apr 17 23:37:07.166449 ignition[900]: INFO : Stage: mount
Apr 17 23:37:07.194363 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:07.194363 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:07.194363 ignition[900]: INFO : mount: mount passed
Apr 17 23:37:07.194363 ignition[900]: INFO : Ignition finished successfully
Apr 17 23:37:07.170641 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:37:07.193245 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:37:07.517432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:37:07.549088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (911)
Apr 17 23:37:07.567833 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:07.567935 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:07.567961 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:07.591412 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:37:07.591502 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:07.594609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:37:07.633877 ignition[928]: INFO : Ignition 2.19.0
Apr 17 23:37:07.633877 ignition[928]: INFO : Stage: files
Apr 17 23:37:07.648720 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:07.648720 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:07.648720 ignition[928]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:37:07.648720 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:37:07.648720 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:37:07.648720 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:37:07.648720 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:37:07.648720 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:37:07.646443 unknown[928]: wrote ssh authorized keys file for user: core
Apr 17 23:37:07.752229 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:37:07.752229 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:37:07.786237 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:37:07.941035 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:37:07.941035 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:07.974223 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 17 23:37:23.403939 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:37:24.113584 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:37:24.113584 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:37:24.132410 ignition[928]: INFO : files: files passed
Apr 17 23:37:24.132410 ignition[928]: INFO : Ignition finished successfully
Apr 17 23:37:24.118673 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:37:24.161270 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:37:24.192299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:37:24.206778 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:37:24.365264 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:24.365264 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:24.206907 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:37:24.403244 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:24.300777 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:37:24.314542 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:37:24.340321 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:37:24.420949 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:37:24.421124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:37:24.442082 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:37:24.461277 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:37:24.481375 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:37:24.488255 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:37:24.573191 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:37:24.579283 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:37:24.625331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:37:24.636376 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:24.657509 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:37:24.675422 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:37:24.675586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:37:24.702485 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:37:24.723434 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:37:24.741521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:37:24.759482 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:37:24.780454 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:37:24.802660 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:37:24.823465 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:37:24.844559 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:37:24.865471 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:37:24.885436 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:37:24.903365 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:37:24.903539 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:37:24.929480 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:37:24.949438 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:24.970358 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:37:24.970699 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:24.992348 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:37:24.992523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:37:25.023490 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:37:25.023847 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:37:25.043524 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:37:25.043772 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:37:25.102367 ignition[980]: INFO : Ignition 2.19.0
Apr 17 23:37:25.102367 ignition[980]: INFO : Stage: umount
Apr 17 23:37:25.102367 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:25.102367 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:37:25.102367 ignition[980]: INFO : umount: umount passed
Apr 17 23:37:25.102367 ignition[980]: INFO : Ignition finished successfully
Apr 17 23:37:25.070331 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:37:25.110189 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:37:25.110474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:37:25.135492 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:37:25.198214 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:37:25.198504 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:37:25.220496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:37:25.220660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:37:25.253028 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:37:25.254146 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:37:25.254272 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:37:25.270800 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:37:25.270916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:37:25.292451 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:37:25.292575 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:37:25.313502 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:37:25.313567 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:37:25.329427 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:37:25.329506 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:37:25.354448 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:37:25.354528 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:37:25.363488 systemd[1]: Stopped target network.target - Network.
Apr 17 23:37:25.393327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:37:25.393450 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:37:25.404495 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:37:25.437227 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:37:25.441150 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:25.463237 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:37:25.488260 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:37:25.496490 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:37:25.496553 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:37:25.531434 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:37:25.531506 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:37:25.557427 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:37:25.557524 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:37:25.586461 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:37:25.586562 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:37:25.611443 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:37:25.611527 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:37:25.630652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:37:25.635124 systemd-networkd[750]: eth0: DHCPv6 lease lost
Apr 17 23:37:25.648458 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:37:25.668856 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:37:25.669001 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:37:25.694911 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:37:25.695186 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:37:25.705065 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:37:25.705239 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:25.747255 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:37:25.767210 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:37:25.767337 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:37:25.778406 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:37:25.778485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:37:25.785484 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:37:25.785571 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:37:25.803503 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:37:25.803617 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:37:25.831579 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:37:25.852825 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:37:25.853013 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:37:25.878795 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:37:25.878866 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:25.891438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:37:25.891495 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:25.919411 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:37:25.919501 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:37:25.949553 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:37:25.949644 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:37:25.987501 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:37:25.987625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:26.036322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:37:26.257099 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:37:26.054438 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:37:26.054521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:37:26.071522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:37:26.071611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:26.103988 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:37:26.104161 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:37:26.123695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:37:26.123827 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:37:26.153815 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:37:26.185316 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:37:26.220605 systemd[1]: Switching root.
Apr 17 23:37:26.363201 systemd-journald[184]: Journal stopped
Apr 17 23:37:29.027000 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:37:29.027057 kernel: SELinux: policy capability open_perms=1
Apr 17 23:37:29.027080 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:37:29.027095 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:37:29.027106 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:37:29.027117 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:37:29.027130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:37:29.027146 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:37:29.027157 kernel: audit: type=1403 audit(1776469046.818:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:37:29.027172 systemd[1]: Successfully loaded SELinux policy in 82.177ms.
Apr 17 23:37:29.027187 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.766ms.
Apr 17 23:37:29.027201 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:37:29.027214 systemd[1]: Detected virtualization google.
Apr 17 23:37:29.027229 systemd[1]: Detected architecture x86-64.
Apr 17 23:37:29.027246 systemd[1]: Detected first boot.
Apr 17 23:37:29.027260 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:37:29.027274 zram_generator::config[1021]: No configuration found.
Apr 17 23:37:29.027291 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:37:29.027304 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:37:29.027321 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:37:29.027335 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:37:29.027459 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:37:29.027478 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:37:29.027491 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:37:29.027505 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:37:29.027519 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:37:29.027543 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:37:29.027557 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:37:29.027570 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:37:29.027584 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:29.027597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:29.027611 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:37:29.027624 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:37:29.027638 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:37:29.027655 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:37:29.027668 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:37:29.027682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:29.027695 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:37:29.027708 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:37:29.027722 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:37:29.027741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:37:29.027755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:29.027768 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:37:29.027789 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:37:29.027803 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:37:29.027817 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:37:29.027832 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:37:29.027846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:29.027859 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:29.027873 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:29.027891 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:37:29.027905 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:37:29.027919 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:37:29.027933 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:37:29.027947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:37:29.027964 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:37:29.027979 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:37:29.027993 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:37:29.028007 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:37:29.028021 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:37:29.028035 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:37:29.028073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:37:29.028092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:37:29.028110 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:37:29.028126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:37:29.028140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:37:29.028154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:37:29.028168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:37:29.028181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:37:29.028196 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:37:29.028210 kernel: ACPI: bus type drm_connector registered
Apr 17 23:37:29.028226 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:37:29.028240 kernel: fuse: init (API version 7.39)
Apr 17 23:37:29.028253 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:37:29.028266 kernel: loop: module loaded
Apr 17 23:37:29.028281 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:37:29.028295 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:37:29.028309 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:37:29.028323 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:37:29.028337 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:37:29.028383 systemd-journald[1108]: Collecting audit messages is disabled.
Apr 17 23:37:29.028411 systemd-journald[1108]: Journal started
Apr 17 23:37:29.028442 systemd-journald[1108]: Runtime Journal (/run/log/journal/e37c7b9228da4e499e4d9ef0a043eeb8) is 8.0M, max 148.7M, 140.7M free.
Apr 17 23:37:27.747269 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:37:27.773236 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:37:27.773859 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:37:29.052674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:37:29.069208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:37:29.092851 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:37:29.092955 systemd[1]: Stopped verity-setup.service.
Apr 17 23:37:29.120085 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:37:29.129105 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:37:29.140728 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:37:29.151485 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:37:29.161478 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:37:29.171481 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:37:29.182535 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:37:29.192459 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:37:29.202665 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:37:29.215690 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:37:29.227625 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:37:29.227884 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:37:29.239676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:37:29.239935 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:37:29.251637 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:37:29.251896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:37:29.262615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:37:29.262872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:37:29.274631 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:37:29.274881 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:37:29.285598 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:37:29.285845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:37:29.296688 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:37:29.306593 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:37:29.318605 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:37:29.330758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:37:29.355797 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:37:29.375243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:37:29.386711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:37:29.397240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:37:29.397335 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:37:29.408514 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:37:29.432371 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:37:29.444680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:37:29.454399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:37:29.466654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:37:29.484406 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:37:29.495239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:37:29.501276 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:37:29.510920 systemd-journald[1108]: Time spent on flushing to /var/log/journal/e37c7b9228da4e499e4d9ef0a043eeb8 is 152.903ms for 926 entries.
Apr 17 23:37:29.510920 systemd-journald[1108]: System Journal (/var/log/journal/e37c7b9228da4e499e4d9ef0a043eeb8) is 8.0M, max 584.8M, 576.8M free.
Apr 17 23:37:29.698295 systemd-journald[1108]: Received client request to flush runtime journal.
Apr 17 23:37:29.698375 kernel: loop0: detected capacity change from 0 to 217752
Apr 17 23:37:29.521518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:37:29.530913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:37:29.552422 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:37:29.571277 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:37:29.591989 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:37:29.608530 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:37:29.628224 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:37:29.643633 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:37:29.655735 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:37:29.673528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:37:29.694305 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:37:29.706961 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:37:29.720177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:37:29.736432 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:37:29.757782 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 17 23:37:29.779269 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:37:29.783442 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:37:29.795127 kernel: loop1: detected capacity change from 0 to 54824 Apr 17 23:37:29.807798 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:37:29.832202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:37:29.916608 kernel: loop2: detected capacity change from 0 to 142488 Apr 17 23:37:29.958798 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Apr 17 23:37:29.958835 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Apr 17 23:37:29.973579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 17 23:37:30.027104 kernel: loop3: detected capacity change from 0 to 140768 Apr 17 23:37:30.125101 kernel: loop4: detected capacity change from 0 to 217752 Apr 17 23:37:30.169105 kernel: loop5: detected capacity change from 0 to 54824 Apr 17 23:37:30.212534 kernel: loop6: detected capacity change from 0 to 142488 Apr 17 23:37:30.273099 kernel: loop7: detected capacity change from 0 to 140768 Apr 17 23:37:30.332871 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Apr 17 23:37:30.333874 (sd-merge)[1164]: Merged extensions into '/usr'. Apr 17 23:37:30.349001 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:37:30.349446 systemd[1]: Reloading... Apr 17 23:37:30.498079 zram_generator::config[1188]: No configuration found. Apr 17 23:37:30.683914 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:37:30.784767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:37:30.879195 systemd[1]: Reloading finished in 527 ms. Apr 17 23:37:30.911222 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:37:30.921778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:37:30.933706 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:37:30.955391 systemd[1]: Starting ensure-sysext.service... Apr 17 23:37:30.972120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:37:30.997393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 17 23:37:31.012713 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:37:31.012746 systemd[1]: Reloading... Apr 17 23:37:31.031296 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:37:31.032007 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:37:31.035917 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:37:31.039252 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Apr 17 23:37:31.039529 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Apr 17 23:37:31.051313 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:37:31.052094 systemd-tmpfiles[1232]: Skipping /boot Apr 17 23:37:31.122448 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Apr 17 23:37:31.123786 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:37:31.123801 systemd-tmpfiles[1232]: Skipping /boot Apr 17 23:37:31.168092 zram_generator::config[1265]: No configuration found. Apr 17 23:37:31.430150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:37:31.510105 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1290) Apr 17 23:37:31.641322 systemd[1]: Reloading finished in 627 ms. 
Apr 17 23:37:31.648073 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 17 23:37:31.648143 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 17 23:37:31.696155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:37:31.702071 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 23:37:31.724077 kernel: EDAC MC: Ver: 3.0.0 Apr 17 23:37:31.726650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:37:31.733201 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:37:31.733388 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 17 23:37:31.747855 kernel: ACPI: button: Sleep Button [SLPF] Apr 17 23:37:31.773911 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:37:31.784075 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:37:31.832850 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:37:31.849603 systemd[1]: Finished ensure-sysext.service. Apr 17 23:37:31.862954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 17 23:37:31.878564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:31.885298 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:37:31.900645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:37:31.912787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:37:31.918294 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Apr 17 23:37:31.937425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:37:31.957356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:37:31.967500 lvm[1343]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:37:31.975470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:37:31.991482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:37:32.001322 augenrules[1356]: No rules Apr 17 23:37:32.012364 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:37:32.021421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:37:32.029533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:37:32.047584 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:37:32.067284 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:37:32.079145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:37:32.090184 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:37:32.107371 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:37:32.129361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.139288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:32.150325 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:37:32.160769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 17 23:37:32.173734 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:37:32.174461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:37:32.174619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:37:32.175005 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:37:32.175335 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:37:32.175717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:37:32.175868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:37:32.176306 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:37:32.176514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:37:32.183062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:37:32.184106 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:37:32.194684 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:37:32.201638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:37:32.207678 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:37:32.210617 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Apr 17 23:37:32.210713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:37:32.210809 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:37:32.214963 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:37:32.218801 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Apr 17 23:37:32.218879 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:37:32.219735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:37:32.234533 lvm[1383]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:37:32.292634 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:37:32.300157 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:37:32.323157 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Apr 17 23:37:32.336542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.348566 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:37:32.439075 systemd-networkd[1365]: lo: Link UP Apr 17 23:37:32.439093 systemd-networkd[1365]: lo: Gained carrier Apr 17 23:37:32.441664 systemd-networkd[1365]: Enumeration completed Apr 17 23:37:32.441863 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:37:32.442846 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:32.442853 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:37:32.443772 systemd-networkd[1365]: eth0: Link UP Apr 17 23:37:32.443779 systemd-networkd[1365]: eth0: Gained carrier Apr 17 23:37:32.443804 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:32.450583 systemd-resolved[1366]: Positive Trust Anchors: Apr 17 23:37:32.450603 systemd-resolved[1366]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:37:32.450645 systemd-resolved[1366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:37:32.454163 systemd-networkd[1365]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:37:32.454189 systemd-networkd[1365]: eth0: DHCPv4 address 10.128.0.110/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 17 23:37:32.461178 systemd-resolved[1366]: Defaulting to hostname 'linux'. Apr 17 23:37:32.462278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:37:32.473365 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:37:32.483318 systemd[1]: Reached target network.target - Network. Apr 17 23:37:32.492221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:37:32.503308 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:37:32.513412 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:37:32.525330 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:37:32.536496 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 17 23:37:32.546418 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:37:32.558303 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:37:32.569275 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:37:32.569340 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:37:32.578254 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:37:32.587893 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:37:32.599978 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:37:32.612568 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:37:32.623102 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:37:32.633367 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:37:32.643242 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:37:32.652295 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:37:32.652345 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:37:32.664237 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:37:32.677097 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:37:32.694202 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:37:32.712249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:37:32.730435 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 17 23:37:32.740489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:37:32.747316 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:37:32.763784 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:37:32.767080 jq[1416]: false Apr 17 23:37:32.767223 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:37:32.789308 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:37:32.809349 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:37:32.835350 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:37:32.836460 extend-filesystems[1417]: Found loop4 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found loop5 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found loop6 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found loop7 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda1 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda2 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda3 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found usr Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda4 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda6 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda7 Apr 17 23:37:32.850315 extend-filesystems[1417]: Found sda9 Apr 17 23:37:32.850315 extend-filesystems[1417]: Checking size of /dev/sda9 Apr 17 23:37:33.007263 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Apr 17 23:37:32.845924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: ---------------------------------------------------- Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: corporation. Support and training for ntp-4 are Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: available at https://www.nwtime.org/support Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: ---------------------------------------------------- Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: proto: precision = 0.096 usec (-23) Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: basedate set to 2026-04-05 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: gps base set to 2026-04-05 (week 2413) Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Listen normally on 3 eth0 10.128.0.110:123 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Listen normally on 4 lo [::1]:123 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: bind(21) AF_INET6 fe80::4001:aff:fe80:6e%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: unable to create socket on eth0 (5) for 
fe80::4001:aff:fe80:6e%2#123 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: failed to init interface for address fe80::4001:aff:fe80:6e%2 Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: Listening on routing socket on fd #21 for interface updates Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:37:33.007515 ntpd[1421]: 17 Apr 23:37:32 ntpd[1421]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.877 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.879 INFO Fetch successful Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.879 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.880 INFO Fetch successful Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.880 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.883 INFO Fetch successful Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.883 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Apr 17 23:37:33.016145 coreos-metadata[1414]: Apr 17 23:37:32.884 INFO Fetch successful Apr 17 23:37:33.016653 extend-filesystems[1417]: Resized partition /dev/sda9 Apr 17 23:37:33.069862 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Apr 17 23:37:32.915784 ntpd[1421]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:37:32.847508 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 17 23:37:33.070668 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:37:33.135367 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1291) Apr 17 23:37:32.915825 ntpd[1421]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:37:32.854266 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:37:33.135820 extend-filesystems[1442]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 17 23:37:33.135820 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:37:33.135820 extend-filesystems[1442]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Apr 17 23:37:32.915841 ntpd[1421]: ---------------------------------------------------- Apr 17 23:37:33.187954 update_engine[1435]: I20260417 23:37:33.024144 1435 main.cc:92] Flatcar Update Engine starting Apr 17 23:37:33.187954 update_engine[1435]: I20260417 23:37:33.044337 1435 update_check_scheduler.cc:74] Next update check in 6m56s Apr 17 23:37:32.919223 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:37:33.193572 extend-filesystems[1417]: Resized filesystem in /dev/sda9 Apr 17 23:37:32.915855 ntpd[1421]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:37:33.209399 jq[1444]: true Apr 17 23:37:32.940479 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:37:32.915870 ntpd[1421]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:37:32.966523 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:37:32.915887 ntpd[1421]: corporation. Support and training for ntp-4 are Apr 17 23:37:32.967109 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:37:32.915901 ntpd[1421]: available at https://www.nwtime.org/support Apr 17 23:37:32.967796 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 17 23:37:32.915914 ntpd[1421]: ---------------------------------------------------- Apr 17 23:37:33.211905 jq[1450]: true Apr 17 23:37:32.968628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:37:32.921106 ntpd[1421]: proto: precision = 0.096 usec (-23) Apr 17 23:37:33.006787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:37:32.921927 dbus-daemon[1415]: [system] SELinux support is enabled Apr 17 23:37:33.008202 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:37:32.922663 ntpd[1421]: basedate set to 2026-04-05 Apr 17 23:37:33.046357 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:37:32.922688 ntpd[1421]: gps base set to 2026-04-05 (week 2413) Apr 17 23:37:33.067017 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:37:32.926524 ntpd[1421]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:37:33.067126 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:37:32.928523 ntpd[1421]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:37:33.104316 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:37:32.928951 ntpd[1421]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:37:33.105799 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:37:32.929286 ntpd[1421]: Listen normally on 3 eth0 10.128.0.110:123 Apr 17 23:37:33.114222 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 17 23:37:33.279307 tar[1449]: linux-amd64/LICENSE Apr 17 23:37:32.929417 ntpd[1421]: Listen normally on 4 lo [::1]:123 Apr 17 23:37:33.114277 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:37:32.929548 ntpd[1421]: bind(21) AF_INET6 fe80::4001:aff:fe80:6e%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:37:33.118090 systemd-logind[1428]: Watching system buttons on /dev/input/event2 (Power Button) Apr 17 23:37:32.929591 ntpd[1421]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6e%2#123 Apr 17 23:37:33.118125 systemd-logind[1428]: Watching system buttons on /dev/input/event3 (Sleep Button) Apr 17 23:37:33.294613 tar[1449]: linux-amd64/helm Apr 17 23:37:32.929614 ntpd[1421]: failed to init interface for address fe80::4001:aff:fe80:6e%2 Apr 17 23:37:33.118159 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:37:32.929669 ntpd[1421]: Listening on routing socket on fd #21 for interface updates Apr 17 23:37:33.121303 systemd-logind[1428]: New seat seat0. Apr 17 23:37:32.930718 dbus-daemon[1415]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1365 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:37:33.134193 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:37:32.940105 ntpd[1421]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:37:33.155629 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:37:32.940152 ntpd[1421]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:37:33.167384 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 17 23:37:33.045952 dbus-daemon[1415]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:37:33.167691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:37:33.299607 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:37:33.374897 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:37:33.383196 dbus-daemon[1415]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:37:33.384318 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:37:33.388320 dbus-daemon[1415]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1460 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:37:33.409995 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:37:33.482929 polkitd[1478]: Started polkitd version 121 Apr 17 23:37:33.489684 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:37:33.495394 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:37:33.504921 polkitd[1478]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:37:33.505036 polkitd[1478]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:37:33.519110 systemd[1]: Starting sshkeys.service... Apr 17 23:37:33.521930 polkitd[1478]: Finished loading, compiling and executing 2 rules Apr 17 23:37:33.529460 dbus-daemon[1415]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:37:33.530227 polkitd[1478]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:37:33.530409 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 17 23:37:33.579571 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:37:33.602431 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 17 23:37:33.628451 systemd-hostnamed[1460]: Hostname set to (transient) Apr 17 23:37:33.629240 systemd-resolved[1366]: System hostname changed to 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1'. Apr 17 23:37:33.632544 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:37:33.668261 systemd-networkd[1365]: eth0: Gained IPv6LL Apr 17 23:37:33.676862 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:37:33.688509 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:37:33.710702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:33.733199 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:37:33.753345 systemd[1]: Starting oem-gce.service - GCE Linux Agent... 
Apr 17 23:37:33.761666 coreos-metadata[1504]: Apr 17 23:37:33.759 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 17 23:37:33.761666 coreos-metadata[1504]: Apr 17 23:37:33.761 INFO Fetch failed with 404: resource not found Apr 17 23:37:33.761666 coreos-metadata[1504]: Apr 17 23:37:33.761 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 17 23:37:33.774000 unknown[1504]: wrote ssh authorized keys file for user: core Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetch successful Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetch failed with 404: resource not found Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetch failed with 404: resource not found Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 17 23:37:33.775193 coreos-metadata[1504]: Apr 17 23:37:33.774 INFO Fetch successful Apr 17 23:37:33.818040 init.sh[1511]: + '[' -e /etc/default/instance_configs.cfg.template ']' Apr 17 23:37:33.825820 init.sh[1511]: + echo -e '[InstanceSetup]\nset_host_keys = false' Apr 17 23:37:33.825820 init.sh[1511]: + /usr/bin/google_instance_setup Apr 17 23:37:33.856201 update-ssh-keys[1515]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:37:33.859944 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:37:33.873464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:37:33.884193 systemd[1]: Finished sshkeys.service.
Apr 17 23:37:34.026075 containerd[1457]: time="2026-04-17T23:37:34.021302493Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:37:34.197157 containerd[1457]: time="2026-04-17T23:37:34.197090496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.204914034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.204999632Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.205068204Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.205387456Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.205423834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.205562546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:37:34.206026 containerd[1457]: time="2026-04-17T23:37:34.205587401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.207000207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.207036520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.207115760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.207135427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.207305704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.207901347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.208623033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:37:34.209005 containerd[1457]: time="2026-04-17T23:37:34.208654160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:37:34.209450 containerd[1457]: time="2026-04-17T23:37:34.209200950Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:37:34.209450 containerd[1457]: time="2026-04-17T23:37:34.209309914Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:37:34.223074 containerd[1457]: time="2026-04-17T23:37:34.221561279Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:37:34.223074 containerd[1457]: time="2026-04-17T23:37:34.221640213Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:37:34.223074 containerd[1457]: time="2026-04-17T23:37:34.221667688Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:37:34.223074 containerd[1457]: time="2026-04-17T23:37:34.221698733Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:37:34.223074 containerd[1457]: time="2026-04-17T23:37:34.221723973Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:37:34.223074 containerd[1457]: time="2026-04-17T23:37:34.221973952Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:37:34.223423 containerd[1457]: time="2026-04-17T23:37:34.223255493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:37:34.223477 containerd[1457]: time="2026-04-17T23:37:34.223435404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:37:34.223477 containerd[1457]: time="2026-04-17T23:37:34.223462800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:37:34.223576 containerd[1457]: time="2026-04-17T23:37:34.223496914Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:37:34.223576 containerd[1457]: time="2026-04-17T23:37:34.223522796Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223576 containerd[1457]: time="2026-04-17T23:37:34.223546266Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223576 containerd[1457]: time="2026-04-17T23:37:34.223568572Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223591180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223615936Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223639665Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223661354Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223684438Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223716452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.223745 containerd[1457]: time="2026-04-17T23:37:34.223740086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223764063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223787644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223808541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223831004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223862403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223886228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223914530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223939541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223959812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.223983268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.224005467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.224031933Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.225217194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.225252859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226090 containerd[1457]: time="2026-04-17T23:37:34.225278414Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226099945Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226217206Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226242323Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226265053Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226286451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226309323Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226329331Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:37:34.226785 containerd[1457]: time="2026-04-17T23:37:34.226348320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:37:34.228127 containerd[1457]: time="2026-04-17T23:37:34.226791853Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:37:34.228127 containerd[1457]: time="2026-04-17T23:37:34.226897298Z" level=info msg="Connect containerd service"
Apr 17 23:37:34.228127 containerd[1457]: time="2026-04-17T23:37:34.226952957Z" level=info msg="using legacy CRI server"
Apr 17 23:37:34.228127 containerd[1457]: time="2026-04-17T23:37:34.226965199Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:37:34.228127 containerd[1457]: time="2026-04-17T23:37:34.227154853Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:37:34.231828 containerd[1457]: time="2026-04-17T23:37:34.231622833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:37:34.231960 containerd[1457]: time="2026-04-17T23:37:34.231846229Z" level=info msg="Start subscribing containerd event"
Apr 17 23:37:34.231960 containerd[1457]: time="2026-04-17T23:37:34.231926104Z" level=info msg="Start recovering state"
Apr 17 23:37:34.232742 containerd[1457]: time="2026-04-17T23:37:34.232017060Z" level=info msg="Start event monitor"
Apr 17 23:37:34.232742 containerd[1457]: time="2026-04-17T23:37:34.232062211Z" level=info msg="Start snapshots syncer"
Apr 17 23:37:34.232742 containerd[1457]: time="2026-04-17T23:37:34.232079289Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:37:34.232742 containerd[1457]: time="2026-04-17T23:37:34.232092190Z" level=info msg="Start streaming server"
Apr 17 23:37:34.235943 containerd[1457]: time="2026-04-17T23:37:34.235890495Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:37:34.236352 containerd[1457]: time="2026-04-17T23:37:34.235973893Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:37:34.236352 containerd[1457]: time="2026-04-17T23:37:34.236078603Z" level=info msg="containerd successfully booted in 0.225713s"
Apr 17 23:37:34.236211 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:37:34.892741 tar[1449]: linux-amd64/README.md
Apr 17 23:37:34.914279 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:37:34.925439 instance-setup[1517]: INFO Running google_set_multiqueue.
Apr 17 23:37:34.958606 instance-setup[1517]: INFO Set channels for eth0 to 2.
Apr 17 23:37:34.964485 instance-setup[1517]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Apr 17 23:37:34.967036 instance-setup[1517]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Apr 17 23:37:34.967456 instance-setup[1517]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Apr 17 23:37:34.969117 instance-setup[1517]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Apr 17 23:37:34.969984 instance-setup[1517]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Apr 17 23:37:34.972206 instance-setup[1517]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Apr 17 23:37:34.972261 instance-setup[1517]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Apr 17 23:37:34.974236 instance-setup[1517]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Apr 17 23:37:34.985784 instance-setup[1517]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Apr 17 23:37:34.992317 instance-setup[1517]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Apr 17 23:37:34.996770 instance-setup[1517]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Apr 17 23:37:34.996824 instance-setup[1517]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Apr 17 23:37:35.035919 init.sh[1511]: + /usr/bin/google_metadata_script_runner --script-type startup
Apr 17 23:37:35.253515 startup-script[1562]: INFO Starting startup scripts.
Apr 17 23:37:35.264108 startup-script[1562]: INFO No startup scripts found in metadata.
Apr 17 23:37:35.264190 startup-script[1562]: INFO Finished running startup scripts.
Apr 17 23:37:35.310113 init.sh[1511]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Apr 17 23:37:35.310288 init.sh[1511]: + daemon_pids=()
Apr 17 23:37:35.310288 init.sh[1511]: + for d in accounts clock_skew network
Apr 17 23:37:35.312074 init.sh[1511]: + daemon_pids+=($!)
Apr 17 23:37:35.312074 init.sh[1511]: + for d in accounts clock_skew network
Apr 17 23:37:35.312074 init.sh[1511]: + daemon_pids+=($!)
Apr 17 23:37:35.312074 init.sh[1511]: + for d in accounts clock_skew network
Apr 17 23:37:35.312320 init.sh[1565]: + /usr/bin/google_accounts_daemon
Apr 17 23:37:35.312702 init.sh[1511]: + daemon_pids+=($!)
Apr 17 23:37:35.312702 init.sh[1511]: + NOTIFY_SOCKET=/run/systemd/notify
Apr 17 23:37:35.312702 init.sh[1511]: + /usr/bin/systemd-notify --ready
Apr 17 23:37:35.313104 init.sh[1566]: + /usr/bin/google_clock_skew_daemon
Apr 17 23:37:35.316739 init.sh[1567]: + /usr/bin/google_network_daemon
Apr 17 23:37:35.331107 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Apr 17 23:37:35.345678 init.sh[1511]: + wait -n 1565 1566 1567
Apr 17 23:37:35.766687 google-networking[1567]: INFO Starting Google Networking daemon.
Apr 17 23:37:35.916476 ntpd[1421]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:6e%2]:123
Apr 17 23:37:35.917499 ntpd[1421]: 17 Apr 23:37:35 ntpd[1421]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:6e%2]:123
Apr 17 23:37:35.953346 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:37:35.987168 google-clock-skew[1566]: INFO Starting Google Clock Skew daemon.
Apr 17 23:37:36.000566 google-clock-skew[1566]: INFO Clock drift token has changed: 0.
Apr 17 23:37:36.019643 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:37:36.037428 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:37:36.069880 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:37:36.070232 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:37:36.078223 groupadd[1584]: group added to /etc/group: name=google-sudoers, GID=1000
Apr 17 23:37:36.086336 groupadd[1584]: group added to /etc/gshadow: name=google-sudoers
Apr 17 23:37:36.087497 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:37:36.122804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:37:36.141294 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:37:36.157971 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:37:36.163798 groupadd[1584]: new group: name=google-sudoers, GID=1000
Apr 17 23:37:36.168532 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:37:36.198669 google-accounts[1565]: INFO Starting Google Accounts daemon.
Apr 17 23:37:36.211042 google-accounts[1565]: WARNING OS Login not installed.
Apr 17 23:37:36.212833 google-accounts[1565]: INFO Creating a new user account for 0.
Apr 17 23:37:36.216727 init.sh[1601]: useradd: invalid user name '0': use --badname to ignore
Apr 17 23:37:36.217466 google-accounts[1565]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Apr 17 23:37:36.255959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:37:36.268489 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:37:36.278750 systemd[1]: Startup finished in 1.043s (kernel) + 25.042s (initrd) + 9.540s (userspace) = 35.626s.
Apr 17 23:37:36.280685 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:37:36.000096 systemd-resolved[1366]: Clock change detected. Flushing caches.
Apr 17 23:37:36.013825 systemd-journald[1108]: Time jumped backwards, rotating.
Apr 17 23:37:36.004747 google-clock-skew[1566]: INFO Synced system time with hardware clock.
Apr 17 23:37:36.581685 kubelet[1608]: E0417 23:37:36.581610 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:37:36.584869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:37:36.585141 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:37:36.585778 systemd[1]: kubelet.service: Consumed 1.235s CPU time.
Apr 17 23:37:40.475657 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:37:40.481132 systemd[1]: Started sshd@0-10.128.0.110:22-50.85.169.122:53028.service - OpenSSH per-connection server daemon (50.85.169.122:53028).
Apr 17 23:37:41.169504 sshd[1622]: Accepted publickey for core from 50.85.169.122 port 53028 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:37:41.173500 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:41.186099 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:37:41.191931 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:37:41.196909 systemd-logind[1428]: New session 1 of user core.
Apr 17 23:37:41.213753 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:37:41.221010 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:37:41.243154 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:37:41.382282 systemd[1626]: Queued start job for default target default.target.
Apr 17 23:37:41.393013 systemd[1626]: Created slice app.slice - User Application Slice.
Apr 17 23:37:41.393061 systemd[1626]: Reached target paths.target - Paths.
Apr 17 23:37:41.393085 systemd[1626]: Reached target timers.target - Timers.
Apr 17 23:37:41.394833 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:37:41.409639 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:37:41.409833 systemd[1626]: Reached target sockets.target - Sockets.
Apr 17 23:37:41.409870 systemd[1626]: Reached target basic.target - Basic System.
Apr 17 23:37:41.409939 systemd[1626]: Reached target default.target - Main User Target.
Apr 17 23:37:41.409994 systemd[1626]: Startup finished in 157ms.
Apr 17 23:37:41.410129 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:37:41.422776 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:37:41.907846 systemd[1]: Started sshd@1-10.128.0.110:22-50.85.169.122:53030.service - OpenSSH per-connection server daemon (50.85.169.122:53030).
Apr 17 23:37:42.575802 sshd[1637]: Accepted publickey for core from 50.85.169.122 port 53030 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:37:42.577655 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:42.584319 systemd-logind[1428]: New session 2 of user core.
Apr 17 23:37:42.593712 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:37:43.047173 sshd[1637]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:43.052745 systemd[1]: sshd@1-10.128.0.110:22-50.85.169.122:53030.service: Deactivated successfully.
Apr 17 23:37:43.055043 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:37:43.056234 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:37:43.057747 systemd-logind[1428]: Removed session 2.
Apr 17 23:37:43.170834 systemd[1]: Started sshd@2-10.128.0.110:22-50.85.169.122:53046.service - OpenSSH per-connection server daemon (50.85.169.122:53046).
Apr 17 23:37:43.855877 sshd[1644]: Accepted publickey for core from 50.85.169.122 port 53046 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:37:43.857751 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:43.864850 systemd-logind[1428]: New session 3 of user core.
Apr 17 23:37:43.870753 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:37:44.321582 sshd[1644]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:44.326918 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:37:44.327743 systemd[1]: sshd@2-10.128.0.110:22-50.85.169.122:53046.service: Deactivated successfully.
Apr 17 23:37:44.330308 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:37:44.331545 systemd-logind[1428]: Removed session 3.
Apr 17 23:37:44.442874 systemd[1]: Started sshd@3-10.128.0.110:22-50.85.169.122:53058.service - OpenSSH per-connection server daemon (50.85.169.122:53058).
Apr 17 23:37:45.124836 sshd[1651]: Accepted publickey for core from 50.85.169.122 port 53058 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:37:45.126714 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:45.133058 systemd-logind[1428]: New session 4 of user core.
Apr 17 23:37:45.142742 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:37:45.598433 sshd[1651]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:45.603950 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:37:45.604933 systemd[1]: sshd@3-10.128.0.110:22-50.85.169.122:53058.service: Deactivated successfully.
Apr 17 23:37:45.607388 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:37:45.608640 systemd-logind[1428]: Removed session 4.
Apr 17 23:37:45.717852 systemd[1]: Started sshd@4-10.128.0.110:22-50.85.169.122:53070.service - OpenSSH per-connection server daemon (50.85.169.122:53070).
Apr 17 23:37:46.385582 sshd[1658]: Accepted publickey for core from 50.85.169.122 port 53070 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:37:46.387563 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:46.394487 systemd-logind[1428]: New session 5 of user core.
Apr 17 23:37:46.399738 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:37:46.638370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:37:46.643781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:37:46.773785 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:37:46.774358 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:46.794391 sudo[1664]: pam_unix(sudo:session): session closed for user root
Apr 17 23:37:46.902874 sshd[1658]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:46.912649 systemd[1]: sshd@4-10.128.0.110:22-50.85.169.122:53070.service: Deactivated successfully.
Apr 17 23:37:46.916580 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:37:46.918210 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:37:46.920209 systemd-logind[1428]: Removed session 5.
Apr 17 23:37:46.953105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:37:46.969070 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:37:47.026900 systemd[1]: Started sshd@5-10.128.0.110:22-50.85.169.122:53082.service - OpenSSH per-connection server daemon (50.85.169.122:53082).
Apr 17 23:37:47.037397 kubelet[1673]: E0417 23:37:47.037345 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:37:47.043216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:37:47.044128 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:37:47.712510 sshd[1680]: Accepted publickey for core from 50.85.169.122 port 53082 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:37:47.713725 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:47.719411 systemd-logind[1428]: New session 6 of user core.
Apr 17 23:37:47.730787 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:37:48.089884 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:37:48.090404 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:48.097289 sudo[1686]: pam_unix(sudo:session): session closed for user root
Apr 17 23:37:48.115038 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:37:48.115652 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:48.134030 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:37:48.141306 auditctl[1689]: No rules
Apr 17 23:37:48.143226 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:37:48.143861 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:37:48.152346 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:37:48.204181 augenrules[1708]: No rules
Apr 17 23:37:48.206363 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:37:48.208085 sudo[1685]: pam_unix(sudo:session): session closed for user root
Apr 17 23:37:48.317887 sshd[1680]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:48.322844 systemd[1]: sshd@5-10.128.0.110:22-50.85.169.122:53082.service: Deactivated successfully.
Apr 17 23:37:48.325327 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:37:48.327270 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:37:48.328803 systemd-logind[1428]: Removed session 6.
Apr 17 23:37:48.444095 systemd[1]: Started sshd@6-10.128.0.110:22-50.85.169.122:53086.service - OpenSSH per-connection server daemon (50.85.169.122:53086).
Apr 17 23:37:49.117352 sshd[1716]: Accepted publickey for core from 50.85.169.122 port 53086 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:37:49.119539 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:37:49.129564 systemd-logind[1428]: New session 7 of user core. Apr 17 23:37:49.137930 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:37:49.493655 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:37:49.494251 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:37:49.989264 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:37:50.001159 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:37:50.458387 dockerd[1735]: time="2026-04-17T23:37:50.458242091Z" level=info msg="Starting up" Apr 17 23:37:50.580965 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2827843405-merged.mount: Deactivated successfully. Apr 17 23:37:50.613996 dockerd[1735]: time="2026-04-17T23:37:50.613907141Z" level=info msg="Loading containers: start." Apr 17 23:37:50.771494 kernel: Initializing XFRM netlink socket Apr 17 23:37:50.900045 systemd-networkd[1365]: docker0: Link UP Apr 17 23:37:50.923840 dockerd[1735]: time="2026-04-17T23:37:50.923774719Z" level=info msg="Loading containers: done." Apr 17 23:37:50.945391 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3360065941-merged.mount: Deactivated successfully. 
Apr 17 23:37:50.948047 dockerd[1735]: time="2026-04-17T23:37:50.947976098Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:37:50.948176 dockerd[1735]: time="2026-04-17T23:37:50.948126205Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:37:50.949523 dockerd[1735]: time="2026-04-17T23:37:50.948664165Z" level=info msg="Daemon has completed initialization" Apr 17 23:37:50.994847 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:37:50.995321 dockerd[1735]: time="2026-04-17T23:37:50.995224475Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:37:51.810819 containerd[1457]: time="2026-04-17T23:37:51.810445651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 17 23:37:52.410879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212824382.mount: Deactivated successfully. 
Apr 17 23:37:53.951901 containerd[1457]: time="2026-04-17T23:37:53.951824976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:53.953635 containerd[1457]: time="2026-04-17T23:37:53.953553096Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27580254" Apr 17 23:37:53.955248 containerd[1457]: time="2026-04-17T23:37:53.954558436Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:53.959393 containerd[1457]: time="2026-04-17T23:37:53.958768187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:53.960405 containerd[1457]: time="2026-04-17T23:37:53.960357065Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.149829373s" Apr 17 23:37:53.960538 containerd[1457]: time="2026-04-17T23:37:53.960414353Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 17 23:37:53.961182 containerd[1457]: time="2026-04-17T23:37:53.961103203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 17 23:37:55.370460 containerd[1457]: time="2026-04-17T23:37:55.370368769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:55.372071 containerd[1457]: time="2026-04-17T23:37:55.372002164Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451905" Apr 17 23:37:55.373782 containerd[1457]: time="2026-04-17T23:37:55.373261266Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:55.376920 containerd[1457]: time="2026-04-17T23:37:55.376864755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:55.378483 containerd[1457]: time="2026-04-17T23:37:55.378419859Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.417262711s" Apr 17 23:37:55.378635 containerd[1457]: time="2026-04-17T23:37:55.378606686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 17 23:37:55.379578 containerd[1457]: time="2026-04-17T23:37:55.379407099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 17 23:37:56.460470 containerd[1457]: time="2026-04-17T23:37:56.460372223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:56.462242 containerd[1457]: time="2026-04-17T23:37:56.462170410Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555518" Apr 17 23:37:56.464524 containerd[1457]: time="2026-04-17T23:37:56.464475465Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:56.471482 containerd[1457]: time="2026-04-17T23:37:56.469555546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:56.471482 containerd[1457]: time="2026-04-17T23:37:56.471414843Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.091705463s" Apr 17 23:37:56.471734 containerd[1457]: time="2026-04-17T23:37:56.471702334Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 17 23:37:56.472495 containerd[1457]: time="2026-04-17T23:37:56.472318184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 17 23:37:57.138912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:37:57.145109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:57.459793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:37:57.468490 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:37:57.550477 kubelet[1951]: E0417 23:37:57.549148 1951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:37:57.554119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:37:57.554582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:37:57.739281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947489448.mount: Deactivated successfully. Apr 17 23:37:58.256627 containerd[1457]: time="2026-04-17T23:37:58.256552420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:58.258115 containerd[1457]: time="2026-04-17T23:37:58.258035034Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25700132" Apr 17 23:37:58.259608 containerd[1457]: time="2026-04-17T23:37:58.259516373Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:58.262549 containerd[1457]: time="2026-04-17T23:37:58.262501911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:58.263696 containerd[1457]: time="2026-04-17T23:37:58.263497484Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id 
\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.790785558s" Apr 17 23:37:58.263696 containerd[1457]: time="2026-04-17T23:37:58.263545392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 17 23:37:58.264601 containerd[1457]: time="2026-04-17T23:37:58.264352426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 17 23:37:58.736583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877105639.mount: Deactivated successfully. Apr 17 23:38:00.224892 containerd[1457]: time="2026-04-17T23:38:00.224818080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:00.226639 containerd[1457]: time="2026-04-17T23:38:00.226573330Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23557388" Apr 17 23:38:00.228494 containerd[1457]: time="2026-04-17T23:38:00.227615415Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:00.231494 containerd[1457]: time="2026-04-17T23:38:00.231344004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:00.233422 containerd[1457]: time="2026-04-17T23:38:00.233363170Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.968971467s" Apr 17 23:38:00.233422 containerd[1457]: time="2026-04-17T23:38:00.233405343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 17 23:38:00.234711 containerd[1457]: time="2026-04-17T23:38:00.234476619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 23:38:00.697909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437858318.mount: Deactivated successfully. Apr 17 23:38:00.709570 containerd[1457]: time="2026-04-17T23:38:00.709496580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:00.711952 containerd[1457]: time="2026-04-17T23:38:00.711864730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321308" Apr 17 23:38:00.714004 containerd[1457]: time="2026-04-17T23:38:00.713925521Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:00.719112 containerd[1457]: time="2026-04-17T23:38:00.717785715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:00.719112 containerd[1457]: time="2026-04-17T23:38:00.718909613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 484.384943ms" Apr 17 23:38:00.719112 containerd[1457]: time="2026-04-17T23:38:00.718951473Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 23:38:00.719978 containerd[1457]: time="2026-04-17T23:38:00.719947482Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 17 23:38:01.238680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533151873.mount: Deactivated successfully. Apr 17 23:38:02.469777 containerd[1457]: time="2026-04-17T23:38:02.469695950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:02.471496 containerd[1457]: time="2026-04-17T23:38:02.471346129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23645184" Apr 17 23:38:02.473636 containerd[1457]: time="2026-04-17T23:38:02.473580406Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:02.478166 containerd[1457]: time="2026-04-17T23:38:02.477850252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:02.479562 containerd[1457]: time="2026-04-17T23:38:02.479350176Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.759242121s" Apr 17 
23:38:02.479562 containerd[1457]: time="2026-04-17T23:38:02.479403828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 17 23:38:03.197016 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 17 23:38:04.293371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:04.300900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:04.346553 systemd[1]: Reloading requested from client PID 2112 ('systemctl') (unit session-7.scope)... Apr 17 23:38:04.346768 systemd[1]: Reloading... Apr 17 23:38:04.509488 zram_generator::config[2152]: No configuration found. Apr 17 23:38:04.681388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:38:04.785733 systemd[1]: Reloading finished in 438 ms. Apr 17 23:38:04.857650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:04.864251 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:38:04.864582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:04.870857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:05.199614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:05.216118 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:38:05.272817 kubelet[2205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:38:05.598580 kubelet[2205]: I0417 23:38:05.597903 2205 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:38:05.598580 kubelet[2205]: I0417 23:38:05.597967 2205 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:38:05.598580 kubelet[2205]: I0417 23:38:05.597992 2205 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:38:05.598580 kubelet[2205]: I0417 23:38:05.598001 2205 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:38:05.599302 kubelet[2205]: I0417 23:38:05.598974 2205 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:38:05.608574 kubelet[2205]: E0417 23:38:05.608511 2205 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:38:05.609359 kubelet[2205]: I0417 23:38:05.609157 2205 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:38:05.613051 kubelet[2205]: E0417 23:38:05.612999 2205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:38:05.613260 kubelet[2205]: I0417 23:38:05.613235 2205 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:38:05.617211 kubelet[2205]: I0417 23:38:05.617178 2205 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:38:05.618761 kubelet[2205]: I0417 23:38:05.618697 2205 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:38:05.619021 kubelet[2205]: I0417 23:38:05.618746 2205 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:38:05.619021 kubelet[2205]: I0417 23:38:05.619010 2205 topology_manager.go:143] "Creating topology 
manager with none policy" Apr 17 23:38:05.619253 kubelet[2205]: I0417 23:38:05.619026 2205 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:38:05.619253 kubelet[2205]: I0417 23:38:05.619155 2205 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:38:05.621677 kubelet[2205]: I0417 23:38:05.621644 2205 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:38:05.621913 kubelet[2205]: I0417 23:38:05.621898 2205 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:38:05.622002 kubelet[2205]: I0417 23:38:05.621922 2205 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:38:05.622002 kubelet[2205]: I0417 23:38:05.621960 2205 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:38:05.622002 kubelet[2205]: I0417 23:38:05.621978 2205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:38:05.626530 kubelet[2205]: I0417 23:38:05.625943 2205 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:38:05.630246 kubelet[2205]: I0417 23:38:05.630216 2205 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:38:05.630435 kubelet[2205]: I0417 23:38:05.630418 2205 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:38:05.630630 kubelet[2205]: W0417 23:38:05.630615 2205 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 17 23:38:05.648095 kubelet[2205]: I0417 23:38:05.648057 2205 server.go:1257] "Started kubelet" Apr 17 23:38:05.650878 kubelet[2205]: I0417 23:38:05.650825 2205 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:38:05.660542 kubelet[2205]: I0417 23:38:05.660439 2205 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:38:05.660542 kubelet[2205]: I0417 23:38:05.660571 2205 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:38:05.660994 kubelet[2205]: I0417 23:38:05.660959 2205 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:38:05.663143 kubelet[2205]: I0417 23:38:05.663113 2205 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:38:05.668439 kubelet[2205]: I0417 23:38:05.666215 2205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:38:05.670258 kubelet[2205]: E0417 23:38:05.670226 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:05.670389 kubelet[2205]: I0417 23:38:05.670288 2205 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:38:05.674332 kubelet[2205]: I0417 23:38:05.674307 2205 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:38:05.674616 kubelet[2205]: I0417 23:38:05.674599 2205 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:38:05.674768 kubelet[2205]: I0417 23:38:05.674755 2205 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:38:05.675844 kubelet[2205]: E0417 23:38:05.672763 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.110:6443/api/v1/namespaces/default/events\": dial tcp 
10.128.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1.18a74938a4829d58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,},FirstTimestamp:2026-04-17 23:38:05.647969624 +0000 UTC m=+0.427087471,LastTimestamp:2026-04-17 23:38:05.647969624 +0000 UTC m=+0.427087471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,}" Apr 17 23:38:05.676062 kubelet[2205]: E0417 23:38:05.675809 2205 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="200ms" Apr 17 23:38:05.677082 kubelet[2205]: I0417 23:38:05.677041 2205 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:38:05.677289 kubelet[2205]: I0417 23:38:05.677251 2205 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:38:05.682213 kubelet[2205]: I0417 23:38:05.682182 2205 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:38:05.687041 kubelet[2205]: I0417 23:38:05.686933 2205 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:38:05.687946 kubelet[2205]: E0417 23:38:05.687914 2205 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:38:05.715867 kubelet[2205]: I0417 23:38:05.715829 2205 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:38:05.716201 kubelet[2205]: I0417 23:38:05.716172 2205 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:38:05.716293 kubelet[2205]: I0417 23:38:05.716214 2205 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:38:05.721727 kubelet[2205]: I0417 23:38:05.721672 2205 policy_none.go:50] "Start" Apr 17 23:38:05.721727 kubelet[2205]: I0417 23:38:05.721709 2205 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:38:05.721727 kubelet[2205]: I0417 23:38:05.721728 2205 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:38:05.725829 kubelet[2205]: I0417 23:38:05.725699 2205 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:38:05.725829 kubelet[2205]: I0417 23:38:05.725758 2205 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:38:05.726834 kubelet[2205]: I0417 23:38:05.725990 2205 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:38:05.726834 kubelet[2205]: E0417 23:38:05.726076 2205 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:38:05.728821 kubelet[2205]: I0417 23:38:05.728641 2205 policy_none.go:44] "Start" Apr 17 23:38:05.740600 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:38:05.751200 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 23:38:05.771113 kubelet[2205]: E0417 23:38:05.771067 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:05.771151 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:38:05.775499 kubelet[2205]: E0417 23:38:05.773943 2205 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:38:05.775499 kubelet[2205]: I0417 23:38:05.774802 2205 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:38:05.775499 kubelet[2205]: I0417 23:38:05.774821 2205 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:38:05.775499 kubelet[2205]: I0417 23:38:05.775354 2205 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:38:05.777079 kubelet[2205]: E0417 23:38:05.777046 2205 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:38:05.777166 kubelet[2205]: E0417 23:38:05.777104 2205 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:05.850482 systemd[1]: Created slice kubepods-burstable-pod20ca86b578c6fff6251f478ebfe54e14.slice - libcontainer container kubepods-burstable-pod20ca86b578c6fff6251f478ebfe54e14.slice. 
Apr 17 23:38:05.864775 kubelet[2205]: E0417 23:38:05.864714 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.869115 systemd[1]: Created slice kubepods-burstable-pod00e85524808dd49b8c972ff2dab20f20.slice - libcontainer container kubepods-burstable-pod00e85524808dd49b8c972ff2dab20f20.slice. Apr 17 23:38:05.877384 kubelet[2205]: E0417 23:38:05.877352 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.877642 kubelet[2205]: E0417 23:38:05.877340 2205 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="400ms" Apr 17 23:38:05.881544 kubelet[2205]: I0417 23:38:05.881501 2205 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.882132 kubelet[2205]: E0417 23:38:05.882060 2205 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.884217 systemd[1]: Created slice kubepods-burstable-podddeb3a55f5dd4ae4bbb71a087b1dfe42.slice - libcontainer container kubepods-burstable-podddeb3a55f5dd4ae4bbb71a087b1dfe42.slice. 
Apr 17 23:38:05.886825 kubelet[2205]: E0417 23:38:05.886789 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.975734 kubelet[2205]: I0417 23:38:05.975620 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddeb3a55f5dd4ae4bbb71a087b1dfe42-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"ddeb3a55f5dd4ae4bbb71a087b1dfe42\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976008 kubelet[2205]: I0417 23:38:05.975746 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20ca86b578c6fff6251f478ebfe54e14-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"20ca86b578c6fff6251f478ebfe54e14\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976008 kubelet[2205]: I0417 23:38:05.975814 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20ca86b578c6fff6251f478ebfe54e14-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"20ca86b578c6fff6251f478ebfe54e14\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976008 kubelet[2205]: I0417 23:38:05.975844 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/20ca86b578c6fff6251f478ebfe54e14-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"20ca86b578c6fff6251f478ebfe54e14\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976008 kubelet[2205]: I0417 23:38:05.975903 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976252 kubelet[2205]: I0417 23:38:05.975983 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976252 kubelet[2205]: I0417 23:38:05.976042 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976252 kubelet[2205]: I0417 23:38:05.976096 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:05.976252 kubelet[2205]: I0417 23:38:05.976126 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:06.088478 kubelet[2205]: I0417 23:38:06.088414 2205 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:06.088884 kubelet[2205]: E0417 23:38:06.088839 2205 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:06.170218 containerd[1457]: time="2026-04-17T23:38:06.170128879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,Uid:20ca86b578c6fff6251f478ebfe54e14,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:06.187206 containerd[1457]: time="2026-04-17T23:38:06.186769940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,Uid:00e85524808dd49b8c972ff2dab20f20,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:06.193616 containerd[1457]: time="2026-04-17T23:38:06.193563616Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,Uid:ddeb3a55f5dd4ae4bbb71a087b1dfe42,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:06.278658 kubelet[2205]: E0417 23:38:06.278590 2205 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="800ms" Apr 17 23:38:06.493272 kubelet[2205]: I0417 23:38:06.493129 2205 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:06.493979 kubelet[2205]: E0417 23:38:06.493919 2205 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:06.605993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616803422.mount: Deactivated successfully. 
Apr 17 23:38:06.617499 containerd[1457]: time="2026-04-17T23:38:06.617359314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:06.618809 containerd[1457]: time="2026-04-17T23:38:06.618752106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:06.620253 containerd[1457]: time="2026-04-17T23:38:06.620189622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312146" Apr 17 23:38:06.621440 containerd[1457]: time="2026-04-17T23:38:06.621377048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:38:06.623260 containerd[1457]: time="2026-04-17T23:38:06.622577991Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:06.624316 containerd[1457]: time="2026-04-17T23:38:06.624227232Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:06.625528 containerd[1457]: time="2026-04-17T23:38:06.625342747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:38:06.628281 containerd[1457]: time="2026-04-17T23:38:06.628216963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:06.630696 
containerd[1457]: time="2026-04-17T23:38:06.630425454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 436.753807ms" Apr 17 23:38:06.633209 containerd[1457]: time="2026-04-17T23:38:06.633169827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 462.915349ms" Apr 17 23:38:06.638500 containerd[1457]: time="2026-04-17T23:38:06.638142908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 451.263426ms" Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848397101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848483681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848527421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848065210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848202270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848241183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848401354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848644008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.848884457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.849027641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.849109189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:06.849701 containerd[1457]: time="2026-04-17T23:38:06.849293689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:06.893676 systemd[1]: Started cri-containerd-c5d19365a1cce64524f5914fce223382e02c7e05e07ba16d286de19bbe195d93.scope - libcontainer container c5d19365a1cce64524f5914fce223382e02c7e05e07ba16d286de19bbe195d93. 
Apr 17 23:38:06.905630 systemd[1]: Started cri-containerd-7ea164c3af2f6ef5ecf0231162f829b11b4962b96526ae4d31e8d25d2934880a.scope - libcontainer container 7ea164c3af2f6ef5ecf0231162f829b11b4962b96526ae4d31e8d25d2934880a. Apr 17 23:38:06.926713 systemd[1]: Started cri-containerd-bf2c4c846d245d527a554d1839edaf4017722455492f7792deba0ef7ef5506c1.scope - libcontainer container bf2c4c846d245d527a554d1839edaf4017722455492f7792deba0ef7ef5506c1. Apr 17 23:38:06.996688 containerd[1457]: time="2026-04-17T23:38:06.996619416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,Uid:00e85524808dd49b8c972ff2dab20f20,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ea164c3af2f6ef5ecf0231162f829b11b4962b96526ae4d31e8d25d2934880a\"" Apr 17 23:38:07.004475 kubelet[2205]: E0417 23:38:07.002368 2205 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f" Apr 17 23:38:07.012096 containerd[1457]: time="2026-04-17T23:38:07.011875351Z" level=info msg="CreateContainer within sandbox \"7ea164c3af2f6ef5ecf0231162f829b11b4962b96526ae4d31e8d25d2934880a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:38:07.043858 containerd[1457]: time="2026-04-17T23:38:07.043654900Z" level=info msg="CreateContainer within sandbox \"7ea164c3af2f6ef5ecf0231162f829b11b4962b96526ae4d31e8d25d2934880a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4c8c86491bbc777dad7db0f918cec555bf2e12670971c81537c2547fbfb148c\"" Apr 17 23:38:07.046332 containerd[1457]: time="2026-04-17T23:38:07.046060221Z" level=info msg="StartContainer for \"a4c8c86491bbc777dad7db0f918cec555bf2e12670971c81537c2547fbfb148c\"" Apr 17 23:38:07.055775 containerd[1457]: 
time="2026-04-17T23:38:07.053213500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,Uid:20ca86b578c6fff6251f478ebfe54e14,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf2c4c846d245d527a554d1839edaf4017722455492f7792deba0ef7ef5506c1\"" Apr 17 23:38:07.059112 containerd[1457]: time="2026-04-17T23:38:07.059059708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1,Uid:ddeb3a55f5dd4ae4bbb71a087b1dfe42,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5d19365a1cce64524f5914fce223382e02c7e05e07ba16d286de19bbe195d93\"" Apr 17 23:38:07.062061 kubelet[2205]: E0417 23:38:07.061604 2205 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa" Apr 17 23:38:07.065467 kubelet[2205]: E0417 23:38:07.065382 2205 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa" Apr 17 23:38:07.066367 containerd[1457]: time="2026-04-17T23:38:07.066324959Z" level=info msg="CreateContainer within sandbox \"bf2c4c846d245d527a554d1839edaf4017722455492f7792deba0ef7ef5506c1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:38:07.071560 containerd[1457]: time="2026-04-17T23:38:07.071207539Z" level=info msg="CreateContainer within sandbox \"c5d19365a1cce64524f5914fce223382e02c7e05e07ba16d286de19bbe195d93\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:38:07.079197 kubelet[2205]: E0417 23:38:07.079147 2205 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="1.6s" Apr 17 23:38:07.096300 containerd[1457]: time="2026-04-17T23:38:07.096108754Z" level=info msg="CreateContainer within sandbox \"bf2c4c846d245d527a554d1839edaf4017722455492f7792deba0ef7ef5506c1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"86149f1037c2a9ff6428bbe7b9f61228904664f7d13250f7844fa67d212171d3\"" Apr 17 23:38:07.097395 containerd[1457]: time="2026-04-17T23:38:07.097353319Z" level=info msg="CreateContainer within sandbox \"c5d19365a1cce64524f5914fce223382e02c7e05e07ba16d286de19bbe195d93\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d5e260823f0026a6c974ea448e067099b8cedf8d121aeca35186f232b28e7995\"" Apr 17 23:38:07.098217 containerd[1457]: time="2026-04-17T23:38:07.098076147Z" level=info msg="StartContainer for \"86149f1037c2a9ff6428bbe7b9f61228904664f7d13250f7844fa67d212171d3\"" Apr 17 23:38:07.098416 containerd[1457]: time="2026-04-17T23:38:07.098346559Z" level=info msg="StartContainer for \"d5e260823f0026a6c974ea448e067099b8cedf8d121aeca35186f232b28e7995\"" Apr 17 23:38:07.118522 systemd[1]: Started cri-containerd-a4c8c86491bbc777dad7db0f918cec555bf2e12670971c81537c2547fbfb148c.scope - libcontainer container a4c8c86491bbc777dad7db0f918cec555bf2e12670971c81537c2547fbfb148c. Apr 17 23:38:07.177905 systemd[1]: Started cri-containerd-86149f1037c2a9ff6428bbe7b9f61228904664f7d13250f7844fa67d212171d3.scope - libcontainer container 86149f1037c2a9ff6428bbe7b9f61228904664f7d13250f7844fa67d212171d3. Apr 17 23:38:07.198693 systemd[1]: Started cri-containerd-d5e260823f0026a6c974ea448e067099b8cedf8d121aeca35186f232b28e7995.scope - libcontainer container d5e260823f0026a6c974ea448e067099b8cedf8d121aeca35186f232b28e7995. 
Apr 17 23:38:07.226644 containerd[1457]: time="2026-04-17T23:38:07.226588492Z" level=info msg="StartContainer for \"a4c8c86491bbc777dad7db0f918cec555bf2e12670971c81537c2547fbfb148c\" returns successfully" Apr 17 23:38:07.288114 containerd[1457]: time="2026-04-17T23:38:07.288059386Z" level=info msg="StartContainer for \"86149f1037c2a9ff6428bbe7b9f61228904664f7d13250f7844fa67d212171d3\" returns successfully" Apr 17 23:38:07.301231 kubelet[2205]: I0417 23:38:07.300721 2205 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:07.301231 kubelet[2205]: E0417 23:38:07.301163 2205 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:07.408218 containerd[1457]: time="2026-04-17T23:38:07.408041412Z" level=info msg="StartContainer for \"d5e260823f0026a6c974ea448e067099b8cedf8d121aeca35186f232b28e7995\" returns successfully" Apr 17 23:38:07.746588 kubelet[2205]: E0417 23:38:07.746311 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:07.746935 kubelet[2205]: E0417 23:38:07.746845 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:07.755039 kubelet[2205]: E0417 23:38:07.754822 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" 
node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:08.762079 kubelet[2205]: E0417 23:38:08.762020 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:08.763601 kubelet[2205]: E0417 23:38:08.763563 2205 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:08.906170 kubelet[2205]: I0417 23:38:08.906121 2205 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.187617 kubelet[2205]: E0417 23:38:09.187550 2205 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.359483 kubelet[2205]: I0417 23:38:09.357422 2205 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.359483 kubelet[2205]: E0417 23:38:09.357499 2205 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\": node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.377184 kubelet[2205]: E0417 23:38:09.376729 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.478110 kubelet[2205]: E0417 23:38:09.477650 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node 
\"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.578882 kubelet[2205]: E0417 23:38:09.578821 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.679475 kubelet[2205]: E0417 23:38:09.679377 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.780313 kubelet[2205]: E0417 23:38:09.780151 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.880408 kubelet[2205]: E0417 23:38:09.880334 2205 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" not found" Apr 17 23:38:09.973651 kubelet[2205]: I0417 23:38:09.973556 2205 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.980294 kubelet[2205]: E0417 23:38:09.980237 2205 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.980294 kubelet[2205]: I0417 23:38:09.980279 2205 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.983537 kubelet[2205]: E0417 23:38:09.983476 2205 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.983537 kubelet[2205]: I0417 23:38:09.983520 2205 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:09.986433 kubelet[2205]: E0417 23:38:09.986378 2205 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:10.529757 kubelet[2205]: I0417 23:38:10.529532 2205 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:10.536075 kubelet[2205]: I0417 23:38:10.536038 2205 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:38:10.627600 kubelet[2205]: I0417 23:38:10.627540 2205 apiserver.go:52] "Watching apiserver" Apr 17 23:38:10.675254 kubelet[2205]: I0417 23:38:10.675190 2205 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:38:11.204890 systemd[1]: Reloading requested from client PID 2490 ('systemctl') (unit session-7.scope)... Apr 17 23:38:11.204913 systemd[1]: Reloading... Apr 17 23:38:11.334533 zram_generator::config[2526]: No configuration found. Apr 17 23:38:11.485467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:38:11.614525 systemd[1]: Reloading finished in 408 ms. 
Apr 17 23:38:11.672886 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:11.685023 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:38:11.685507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:11.691894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:12.009759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:12.020132 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:38:12.100771 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:38:12.109200 kubelet[2578]: I0417 23:38:12.109132 2578 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:38:12.109200 kubelet[2578]: I0417 23:38:12.109175 2578 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:38:12.109200 kubelet[2578]: I0417 23:38:12.109188 2578 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:38:12.109200 kubelet[2578]: I0417 23:38:12.109196 2578 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:38:12.109673 kubelet[2578]: I0417 23:38:12.109638 2578 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:38:12.111073 kubelet[2578]: I0417 23:38:12.111036 2578 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:38:12.113801 kubelet[2578]: I0417 23:38:12.113626 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:38:12.120120 kubelet[2578]: E0417 23:38:12.119828 2578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:38:12.120120 kubelet[2578]: I0417 23:38:12.119908 2578 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:38:12.124488 kubelet[2578]: I0417 23:38:12.124434 2578 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:38:12.124842 kubelet[2578]: I0417 23:38:12.124796 2578 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:38:12.125089 kubelet[2578]: I0417 23:38:12.124828 2578 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:38:12.125089 kubelet[2578]: I0417 23:38:12.125088 2578 topology_manager.go:143] "Creating topology 
manager with none policy" Apr 17 23:38:12.125308 kubelet[2578]: I0417 23:38:12.125107 2578 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:38:12.125308 kubelet[2578]: I0417 23:38:12.125142 2578 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:38:12.125510 kubelet[2578]: I0417 23:38:12.125489 2578 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:38:12.125922 kubelet[2578]: I0417 23:38:12.125885 2578 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:38:12.125922 kubelet[2578]: I0417 23:38:12.125919 2578 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:38:12.126083 kubelet[2578]: I0417 23:38:12.125944 2578 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:38:12.126083 kubelet[2578]: I0417 23:38:12.125958 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:38:12.131524 kubelet[2578]: I0417 23:38:12.131056 2578 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:38:12.132408 kubelet[2578]: I0417 23:38:12.132326 2578 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:38:12.132588 kubelet[2578]: I0417 23:38:12.132432 2578 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:38:12.165785 kubelet[2578]: I0417 23:38:12.165751 2578 server.go:1257] "Started kubelet" Apr 17 23:38:12.167756 kubelet[2578]: I0417 23:38:12.166963 2578 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:38:12.167756 kubelet[2578]: I0417 23:38:12.167545 2578 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:38:12.169608 
kubelet[2578]: I0417 23:38:12.168715 2578 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:38:12.169608 kubelet[2578]: I0417 23:38:12.168815 2578 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:38:12.171161 kubelet[2578]: I0417 23:38:12.171139 2578 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:38:12.172512 kubelet[2578]: I0417 23:38:12.171146 2578 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:38:12.175510 kubelet[2578]: I0417 23:38:12.175421 2578 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:38:12.177476 kubelet[2578]: I0417 23:38:12.176030 2578 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:38:12.177476 kubelet[2578]: I0417 23:38:12.176191 2578 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:38:12.184218 kubelet[2578]: I0417 23:38:12.184175 2578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:38:12.186843 kubelet[2578]: I0417 23:38:12.186805 2578 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:38:12.187057 kubelet[2578]: I0417 23:38:12.186952 2578 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:38:12.207635 kubelet[2578]: I0417 23:38:12.207603 2578 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:38:12.228013 kubelet[2578]: I0417 23:38:12.227924 2578 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:38:12.233915 kubelet[2578]: I0417 23:38:12.231327 2578 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:38:12.233915 kubelet[2578]: I0417 23:38:12.231357 2578 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:38:12.233915 kubelet[2578]: I0417 23:38:12.231408 2578 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:38:12.233915 kubelet[2578]: E0417 23:38:12.231508 2578 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:38:12.249315 kubelet[2578]: E0417 23:38:12.249275 2578 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.304883 2578 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305062 2578 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305115 2578 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305343 2578 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305359 2578 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305386 2578 policy_none.go:50] "Start" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305399 2578 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:38:12.305563 kubelet[2578]: I0417 23:38:12.305413 2578 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:38:12.308804 kubelet[2578]: I0417 23:38:12.306928 2578 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 17 23:38:12.308804 kubelet[2578]: I0417 23:38:12.306950 2578 policy_none.go:44] "Start" Apr 17 23:38:12.317015 kubelet[2578]: E0417 23:38:12.316965 2578 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:38:12.317246 kubelet[2578]: I0417 23:38:12.317212 2578 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:38:12.317329 kubelet[2578]: I0417 23:38:12.317233 2578 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:38:12.318124 kubelet[2578]: I0417 23:38:12.317987 2578 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:38:12.323061 kubelet[2578]: E0417 23:38:12.323030 2578 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:38:12.333247 kubelet[2578]: I0417 23:38:12.332547 2578 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.333247 kubelet[2578]: I0417 23:38:12.333019 2578 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.333725 kubelet[2578]: I0417 23:38:12.333284 2578 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.350270 kubelet[2578]: I0417 23:38:12.349836 2578 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:38:12.356905 kubelet[2578]: I0417 23:38:12.355805 2578 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:38:12.358743 kubelet[2578]: I0417 23:38:12.357986 2578 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:38:12.358743 kubelet[2578]: E0417 23:38:12.358071 2578 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.439098 kubelet[2578]: I0417 23:38:12.438793 2578 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.450811 kubelet[2578]: I0417 23:38:12.450765 2578 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.451015 kubelet[2578]: I0417 23:38:12.450884 2578 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.477418 kubelet[2578]: I0417 23:38:12.477315 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20ca86b578c6fff6251f478ebfe54e14-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"20ca86b578c6fff6251f478ebfe54e14\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.477918 kubelet[2578]: I0417 23:38:12.477707 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20ca86b578c6fff6251f478ebfe54e14-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"20ca86b578c6fff6251f478ebfe54e14\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.477918 kubelet[2578]: I0417 23:38:12.477793 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.477918 kubelet[2578]: I0417 23:38:12.477828 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.477918 kubelet[2578]: I0417 23:38:12.477885 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.478463 kubelet[2578]: I0417 23:38:12.478218 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20ca86b578c6fff6251f478ebfe54e14-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"20ca86b578c6fff6251f478ebfe54e14\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.478463 kubelet[2578]: I0417 23:38:12.478301 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.478463 kubelet[2578]: I0417 23:38:12.478406 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00e85524808dd49b8c972ff2dab20f20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"00e85524808dd49b8c972ff2dab20f20\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:12.478772 kubelet[2578]: I0417 23:38:12.478440 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddeb3a55f5dd4ae4bbb71a087b1dfe42-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" (UID: \"ddeb3a55f5dd4ae4bbb71a087b1dfe42\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:13.127516 kubelet[2578]: I0417 23:38:13.127207 2578 apiserver.go:52] "Watching apiserver" Apr 17 23:38:13.176838 kubelet[2578]: I0417 23:38:13.176740 2578 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:38:13.304789 
kubelet[2578]: I0417 23:38:13.304367 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" podStartSLOduration=3.304233872 podStartE2EDuration="3.304233872s" podCreationTimestamp="2026-04-17 23:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:13.303766846 +0000 UTC m=+1.275766713" watchObservedRunningTime="2026-04-17 23:38:13.304233872 +0000 UTC m=+1.276233739" Apr 17 23:38:13.331018 kubelet[2578]: I0417 23:38:13.330802 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" podStartSLOduration=1.330778633 podStartE2EDuration="1.330778633s" podCreationTimestamp="2026-04-17 23:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:13.316183999 +0000 UTC m=+1.288183864" watchObservedRunningTime="2026-04-17 23:38:13.330778633 +0000 UTC m=+1.302778498" Apr 17 23:38:13.355262 kubelet[2578]: I0417 23:38:13.354804 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" podStartSLOduration=1.354753495 podStartE2EDuration="1.354753495s" podCreationTimestamp="2026-04-17 23:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:13.333311931 +0000 UTC m=+1.305311800" watchObservedRunningTime="2026-04-17 23:38:13.354753495 +0000 UTC m=+1.326753344" Apr 17 23:38:17.928054 kubelet[2578]: I0417 23:38:17.927834 2578 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:38:17.929665 
containerd[1457]: time="2026-04-17T23:38:17.929109165Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:38:17.930132 kubelet[2578]: I0417 23:38:17.929368 2578 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:38:18.194932 update_engine[1435]: I20260417 23:38:18.194684 1435 update_attempter.cc:509] Updating boot flags... Apr 17 23:38:18.263594 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2639) Apr 17 23:38:18.393297 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2642) Apr 17 23:38:19.071716 systemd[1]: Created slice kubepods-besteffort-poda0dac501_80ad_4ab0_9b2f_d960b85e63b3.slice - libcontainer container kubepods-besteffort-poda0dac501_80ad_4ab0_9b2f_d960b85e63b3.slice. Apr 17 23:38:19.222223 kubelet[2578]: I0417 23:38:19.221869 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0dac501-80ad-4ab0-9b2f-d960b85e63b3-kube-proxy\") pod \"kube-proxy-8s5kc\" (UID: \"a0dac501-80ad-4ab0-9b2f-d960b85e63b3\") " pod="kube-system/kube-proxy-8s5kc" Apr 17 23:38:19.222223 kubelet[2578]: I0417 23:38:19.222011 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0dac501-80ad-4ab0-9b2f-d960b85e63b3-xtables-lock\") pod \"kube-proxy-8s5kc\" (UID: \"a0dac501-80ad-4ab0-9b2f-d960b85e63b3\") " pod="kube-system/kube-proxy-8s5kc" Apr 17 23:38:19.222223 kubelet[2578]: I0417 23:38:19.222080 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0dac501-80ad-4ab0-9b2f-d960b85e63b3-lib-modules\") pod \"kube-proxy-8s5kc\" (UID: \"a0dac501-80ad-4ab0-9b2f-d960b85e63b3\") " 
pod="kube-system/kube-proxy-8s5kc" Apr 17 23:38:19.222223 kubelet[2578]: I0417 23:38:19.222116 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qmb\" (UniqueName: \"kubernetes.io/projected/a0dac501-80ad-4ab0-9b2f-d960b85e63b3-kube-api-access-78qmb\") pod \"kube-proxy-8s5kc\" (UID: \"a0dac501-80ad-4ab0-9b2f-d960b85e63b3\") " pod="kube-system/kube-proxy-8s5kc" Apr 17 23:38:19.257762 systemd[1]: Created slice kubepods-besteffort-pod96d6cce8_7692_4ab4_8273_ca2cc858b4bc.slice - libcontainer container kubepods-besteffort-pod96d6cce8_7692_4ab4_8273_ca2cc858b4bc.slice. Apr 17 23:38:19.386550 containerd[1457]: time="2026-04-17T23:38:19.386399763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8s5kc,Uid:a0dac501-80ad-4ab0-9b2f-d960b85e63b3,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:19.422741 containerd[1457]: time="2026-04-17T23:38:19.422247498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:19.422741 containerd[1457]: time="2026-04-17T23:38:19.422335689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:19.422741 containerd[1457]: time="2026-04-17T23:38:19.422362984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:19.423407 kubelet[2578]: I0417 23:38:19.423192 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zvs\" (UniqueName: \"kubernetes.io/projected/96d6cce8-7692-4ab4-8273-ca2cc858b4bc-kube-api-access-s8zvs\") pod \"tigera-operator-6cf4cccc57-plq8d\" (UID: \"96d6cce8-7692-4ab4-8273-ca2cc858b4bc\") " pod="tigera-operator/tigera-operator-6cf4cccc57-plq8d" Apr 17 23:38:19.423407 kubelet[2578]: I0417 23:38:19.423256 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96d6cce8-7692-4ab4-8273-ca2cc858b4bc-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-plq8d\" (UID: \"96d6cce8-7692-4ab4-8273-ca2cc858b4bc\") " pod="tigera-operator/tigera-operator-6cf4cccc57-plq8d" Apr 17 23:38:19.424279 containerd[1457]: time="2026-04-17T23:38:19.424144423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:19.456733 systemd[1]: Started cri-containerd-7a26ba237532993d33ffa9af3564b9329741b3e9d519f3f9b47aa0daf55e6ef4.scope - libcontainer container 7a26ba237532993d33ffa9af3564b9329741b3e9d519f3f9b47aa0daf55e6ef4. 
Apr 17 23:38:19.487769 containerd[1457]: time="2026-04-17T23:38:19.487529965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8s5kc,Uid:a0dac501-80ad-4ab0-9b2f-d960b85e63b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a26ba237532993d33ffa9af3564b9329741b3e9d519f3f9b47aa0daf55e6ef4\"" Apr 17 23:38:19.496093 containerd[1457]: time="2026-04-17T23:38:19.495949938Z" level=info msg="CreateContainer within sandbox \"7a26ba237532993d33ffa9af3564b9329741b3e9d519f3f9b47aa0daf55e6ef4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:38:19.518205 containerd[1457]: time="2026-04-17T23:38:19.518154265Z" level=info msg="CreateContainer within sandbox \"7a26ba237532993d33ffa9af3564b9329741b3e9d519f3f9b47aa0daf55e6ef4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89bad6a05cb0346f47c6c03216e28f92c3f8c73fb44e0fcec581601ea71cfa7b\"" Apr 17 23:38:19.519194 containerd[1457]: time="2026-04-17T23:38:19.519087305Z" level=info msg="StartContainer for \"89bad6a05cb0346f47c6c03216e28f92c3f8c73fb44e0fcec581601ea71cfa7b\"" Apr 17 23:38:19.562810 systemd[1]: Started cri-containerd-89bad6a05cb0346f47c6c03216e28f92c3f8c73fb44e0fcec581601ea71cfa7b.scope - libcontainer container 89bad6a05cb0346f47c6c03216e28f92c3f8c73fb44e0fcec581601ea71cfa7b. Apr 17 23:38:19.573915 containerd[1457]: time="2026-04-17T23:38:19.573863837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-plq8d,Uid:96d6cce8-7692-4ab4-8273-ca2cc858b4bc,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:38:19.624128 containerd[1457]: time="2026-04-17T23:38:19.623962859Z" level=info msg="StartContainer for \"89bad6a05cb0346f47c6c03216e28f92c3f8c73fb44e0fcec581601ea71cfa7b\" returns successfully" Apr 17 23:38:19.650672 containerd[1457]: time="2026-04-17T23:38:19.649879628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:19.650672 containerd[1457]: time="2026-04-17T23:38:19.649962298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:19.650672 containerd[1457]: time="2026-04-17T23:38:19.650013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:19.652517 containerd[1457]: time="2026-04-17T23:38:19.651175542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:19.681730 systemd[1]: Started cri-containerd-6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677.scope - libcontainer container 6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677. Apr 17 23:38:19.759142 containerd[1457]: time="2026-04-17T23:38:19.759090369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-plq8d,Uid:96d6cce8-7692-4ab4-8273-ca2cc858b4bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677\"" Apr 17 23:38:19.764604 containerd[1457]: time="2026-04-17T23:38:19.763436404Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:38:20.313279 kubelet[2578]: I0417 23:38:20.312953 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-8s5kc" podStartSLOduration=1.3128716919999999 podStartE2EDuration="1.312871692s" podCreationTimestamp="2026-04-17 23:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:20.312575943 +0000 UTC m=+8.284575809" watchObservedRunningTime="2026-04-17 23:38:20.312871692 +0000 UTC m=+8.284871558" Apr 17 23:38:20.847388 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1738466843.mount: Deactivated successfully. Apr 17 23:38:22.393993 containerd[1457]: time="2026-04-17T23:38:22.393933911Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:22.397500 containerd[1457]: time="2026-04-17T23:38:22.396733601Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:38:22.398237 containerd[1457]: time="2026-04-17T23:38:22.398192991Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:22.401408 containerd[1457]: time="2026-04-17T23:38:22.401364328Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:22.402564 containerd[1457]: time="2026-04-17T23:38:22.402519286Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.639009734s" Apr 17 23:38:22.402678 containerd[1457]: time="2026-04-17T23:38:22.402570663Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:38:22.408786 containerd[1457]: time="2026-04-17T23:38:22.408732139Z" level=info msg="CreateContainer within sandbox \"6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:38:22.434063 containerd[1457]: 
time="2026-04-17T23:38:22.433989836Z" level=info msg="CreateContainer within sandbox \"6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9\"" Apr 17 23:38:22.436341 containerd[1457]: time="2026-04-17T23:38:22.435062301Z" level=info msg="StartContainer for \"f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9\"" Apr 17 23:38:22.489765 systemd[1]: Started cri-containerd-f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9.scope - libcontainer container f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9. Apr 17 23:38:22.528678 containerd[1457]: time="2026-04-17T23:38:22.527553631Z" level=info msg="StartContainer for \"f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9\" returns successfully" Apr 17 23:38:26.225024 systemd[1]: cri-containerd-f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9.scope: Deactivated successfully. Apr 17 23:38:26.287301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:26.775041 containerd[1457]: time="2026-04-17T23:38:26.774896032Z" level=info msg="shim disconnected" id=f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9 namespace=k8s.io Apr 17 23:38:26.775041 containerd[1457]: time="2026-04-17T23:38:26.775040698Z" level=warning msg="cleaning up after shim disconnected" id=f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9 namespace=k8s.io Apr 17 23:38:26.775836 containerd[1457]: time="2026-04-17T23:38:26.775057538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:27.252407 kubelet[2578]: I0417 23:38:27.252311 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-plq8d" podStartSLOduration=5.611084565 podStartE2EDuration="8.252287845s" podCreationTimestamp="2026-04-17 23:38:19 +0000 UTC" firstStartedPulling="2026-04-17 23:38:19.762709771 +0000 UTC m=+7.734709612" lastFinishedPulling="2026-04-17 23:38:22.403913033 +0000 UTC m=+10.375912892" observedRunningTime="2026-04-17 23:38:23.3221166 +0000 UTC m=+11.294116467" watchObservedRunningTime="2026-04-17 23:38:27.252287845 +0000 UTC m=+15.224287711" Apr 17 23:38:27.334853 kubelet[2578]: I0417 23:38:27.334418 2578 scope.go:122] "RemoveContainer" containerID="f96a3e380607a7e965e1a1cc863ba74a85dbec0cfe5b52fc2f2072b18b0c08b9" Apr 17 23:38:27.339488 containerd[1457]: time="2026-04-17T23:38:27.339416488Z" level=info msg="CreateContainer within sandbox \"6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 17 23:38:27.359302 containerd[1457]: time="2026-04-17T23:38:27.359214858Z" level=info msg="CreateContainer within sandbox \"6a1cbc6a2b526906f9dc898f84ee50fb09703a33437429e0e883f9bd9f29e677\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"850564c956b72913b8efb574847c83d73311873886333063f4c837868be7b9fc\"" Apr 17 23:38:27.360950 containerd[1457]: 
time="2026-04-17T23:38:27.360903997Z" level=info msg="StartContainer for \"850564c956b72913b8efb574847c83d73311873886333063f4c837868be7b9fc\"" Apr 17 23:38:27.437880 systemd[1]: run-containerd-runc-k8s.io-850564c956b72913b8efb574847c83d73311873886333063f4c837868be7b9fc-runc.FIRQDH.mount: Deactivated successfully. Apr 17 23:38:27.451731 systemd[1]: Started cri-containerd-850564c956b72913b8efb574847c83d73311873886333063f4c837868be7b9fc.scope - libcontainer container 850564c956b72913b8efb574847c83d73311873886333063f4c837868be7b9fc. Apr 17 23:38:27.632225 containerd[1457]: time="2026-04-17T23:38:27.632085952Z" level=info msg="StartContainer for \"850564c956b72913b8efb574847c83d73311873886333063f4c837868be7b9fc\" returns successfully" Apr 17 23:38:29.782488 sudo[1719]: pam_unix(sudo:session): session closed for user root Apr 17 23:38:29.889981 sshd[1716]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:29.894858 systemd[1]: sshd@6-10.128.0.110:22-50.85.169.122:53086.service: Deactivated successfully. Apr 17 23:38:29.898130 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:38:29.898693 systemd[1]: session-7.scope: Consumed 5.042s CPU time, 158.2M memory peak, 0B memory swap peak. Apr 17 23:38:29.900783 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:38:29.902369 systemd-logind[1428]: Removed session 7. Apr 17 23:38:35.616921 systemd[1]: Created slice kubepods-besteffort-podc5179dbf_3828_4a09_8403_46c39a0fa60a.slice - libcontainer container kubepods-besteffort-podc5179dbf_3828_4a09_8403_46c39a0fa60a.slice. 
Apr 17 23:38:35.741235 kubelet[2578]: I0417 23:38:35.741004 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c5179dbf-3828-4a09-8403-46c39a0fa60a-typha-certs\") pod \"calico-typha-7685585854-kwm48\" (UID: \"c5179dbf-3828-4a09-8403-46c39a0fa60a\") " pod="calico-system/calico-typha-7685585854-kwm48" Apr 17 23:38:35.741235 kubelet[2578]: I0417 23:38:35.741064 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9n9q\" (UniqueName: \"kubernetes.io/projected/c5179dbf-3828-4a09-8403-46c39a0fa60a-kube-api-access-k9n9q\") pod \"calico-typha-7685585854-kwm48\" (UID: \"c5179dbf-3828-4a09-8403-46c39a0fa60a\") " pod="calico-system/calico-typha-7685585854-kwm48" Apr 17 23:38:35.741235 kubelet[2578]: I0417 23:38:35.741097 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5179dbf-3828-4a09-8403-46c39a0fa60a-tigera-ca-bundle\") pod \"calico-typha-7685585854-kwm48\" (UID: \"c5179dbf-3828-4a09-8403-46c39a0fa60a\") " pod="calico-system/calico-typha-7685585854-kwm48" Apr 17 23:38:35.928268 containerd[1457]: time="2026-04-17T23:38:35.928205419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7685585854-kwm48,Uid:c5179dbf-3828-4a09-8403-46c39a0fa60a,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:35.964907 systemd[1]: Created slice kubepods-besteffort-pod8da04557_fabf_4d5c_87e7_3a298db52751.slice - libcontainer container kubepods-besteffort-pod8da04557_fabf_4d5c_87e7_3a298db52751.slice. Apr 17 23:38:35.992603 containerd[1457]: time="2026-04-17T23:38:35.992367684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:35.994652 containerd[1457]: time="2026-04-17T23:38:35.993019529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:35.994889 containerd[1457]: time="2026-04-17T23:38:35.994535612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:35.994889 containerd[1457]: time="2026-04-17T23:38:35.994776977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:36.041758 kubelet[2578]: I0417 23:38:36.041705 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-cni-net-dir\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.041758 kubelet[2578]: I0417 23:38:36.041758 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-policysync\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.041978 kubelet[2578]: I0417 23:38:36.041786 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-cni-bin-dir\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.041978 kubelet[2578]: I0417 23:38:36.041809 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-cni-log-dir\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.041978 kubelet[2578]: I0417 23:38:36.041835 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-bpffs\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.041978 kubelet[2578]: I0417 23:38:36.041858 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-nodeproc\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.041978 kubelet[2578]: I0417 23:38:36.041886 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-var-lib-calico\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042235 kubelet[2578]: I0417 23:38:36.041911 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-var-run-calico\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042235 kubelet[2578]: I0417 23:38:36.041936 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-lib-modules\") pod 
\"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042235 kubelet[2578]: I0417 23:38:36.042003 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-xtables-lock\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042235 kubelet[2578]: I0417 23:38:36.042037 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7x6\" (UniqueName: \"kubernetes.io/projected/8da04557-fabf-4d5c-87e7-3a298db52751-kube-api-access-mn7x6\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042235 kubelet[2578]: I0417 23:38:36.042070 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8da04557-fabf-4d5c-87e7-3a298db52751-tigera-ca-bundle\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042520 kubelet[2578]: I0417 23:38:36.042101 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-flexvol-driver-host\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042520 kubelet[2578]: I0417 23:38:36.042127 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8da04557-fabf-4d5c-87e7-3a298db52751-node-certs\") pod \"calico-node-6fk2w\" (UID: 
\"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.042520 kubelet[2578]: I0417 23:38:36.042153 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8da04557-fabf-4d5c-87e7-3a298db52751-sys-fs\") pod \"calico-node-6fk2w\" (UID: \"8da04557-fabf-4d5c-87e7-3a298db52751\") " pod="calico-system/calico-node-6fk2w" Apr 17 23:38:36.046716 systemd[1]: Started cri-containerd-0148b6f6702fd3a2e8a13ff16c12b9d4835b27ca2739b4de59819bb648ab040d.scope - libcontainer container 0148b6f6702fd3a2e8a13ff16c12b9d4835b27ca2739b4de59819bb648ab040d. Apr 17 23:38:36.079663 kubelet[2578]: E0417 23:38:36.079600 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:36.160279 kubelet[2578]: E0417 23:38:36.158095 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.160279 kubelet[2578]: W0417 23:38:36.158125 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.160279 kubelet[2578]: E0417 23:38:36.158158 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.161213 kubelet[2578]: E0417 23:38:36.161034 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.161213 kubelet[2578]: W0417 23:38:36.161056 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.161213 kubelet[2578]: E0417 23:38:36.161081 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.165849 kubelet[2578]: E0417 23:38:36.165343 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.165849 kubelet[2578]: W0417 23:38:36.165377 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.165849 kubelet[2578]: E0417 23:38:36.165404 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.167624 kubelet[2578]: E0417 23:38:36.167272 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.167624 kubelet[2578]: W0417 23:38:36.167297 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.167624 kubelet[2578]: E0417 23:38:36.167322 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.167967 kubelet[2578]: E0417 23:38:36.167948 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.168072 kubelet[2578]: W0417 23:38:36.168052 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.168172 kubelet[2578]: E0417 23:38:36.168154 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.169362 kubelet[2578]: E0417 23:38:36.169186 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.169362 kubelet[2578]: W0417 23:38:36.169205 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.169362 kubelet[2578]: E0417 23:38:36.169225 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.171553 kubelet[2578]: E0417 23:38:36.171327 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.171553 kubelet[2578]: W0417 23:38:36.171347 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.171553 kubelet[2578]: E0417 23:38:36.171367 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.172858 kubelet[2578]: E0417 23:38:36.172503 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.172858 kubelet[2578]: W0417 23:38:36.172527 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.172858 kubelet[2578]: E0417 23:38:36.172546 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.175278 kubelet[2578]: E0417 23:38:36.173635 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.175278 kubelet[2578]: W0417 23:38:36.173655 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.175278 kubelet[2578]: E0417 23:38:36.173675 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.175616 kubelet[2578]: E0417 23:38:36.175598 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.175742 kubelet[2578]: W0417 23:38:36.175699 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.175742 kubelet[2578]: E0417 23:38:36.175723 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.176312 kubelet[2578]: E0417 23:38:36.176238 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.176438 kubelet[2578]: W0417 23:38:36.176420 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.176678 kubelet[2578]: E0417 23:38:36.176556 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.178372 kubelet[2578]: E0417 23:38:36.177855 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.178372 kubelet[2578]: W0417 23:38:36.177876 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.178372 kubelet[2578]: E0417 23:38:36.177896 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.180500 kubelet[2578]: E0417 23:38:36.179653 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.180500 kubelet[2578]: W0417 23:38:36.179673 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.180500 kubelet[2578]: E0417 23:38:36.179692 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.183205 kubelet[2578]: E0417 23:38:36.182992 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.183205 kubelet[2578]: W0417 23:38:36.183014 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.183205 kubelet[2578]: E0417 23:38:36.183034 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.185189 kubelet[2578]: E0417 23:38:36.184970 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.185189 kubelet[2578]: W0417 23:38:36.184989 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.185189 kubelet[2578]: E0417 23:38:36.185008 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.187037 kubelet[2578]: E0417 23:38:36.187018 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.187366 kubelet[2578]: W0417 23:38:36.187206 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.187366 kubelet[2578]: E0417 23:38:36.187233 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.190321 kubelet[2578]: E0417 23:38:36.189743 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.190321 kubelet[2578]: W0417 23:38:36.189764 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.190321 kubelet[2578]: E0417 23:38:36.189784 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.190321 kubelet[2578]: E0417 23:38:36.190241 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.190321 kubelet[2578]: W0417 23:38:36.190285 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.190321 kubelet[2578]: E0417 23:38:36.190304 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.199960 kubelet[2578]: E0417 23:38:36.199918 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.200268 kubelet[2578]: W0417 23:38:36.200234 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.200363 kubelet[2578]: E0417 23:38:36.200310 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.203178 containerd[1457]: time="2026-04-17T23:38:36.203039961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7685585854-kwm48,Uid:c5179dbf-3828-4a09-8403-46c39a0fa60a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0148b6f6702fd3a2e8a13ff16c12b9d4835b27ca2739b4de59819bb648ab040d\"" Apr 17 23:38:36.207346 containerd[1457]: time="2026-04-17T23:38:36.207135610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:38:36.244907 kubelet[2578]: E0417 23:38:36.244626 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.244907 kubelet[2578]: W0417 23:38:36.244658 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.244907 kubelet[2578]: E0417 23:38:36.244685 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.244907 kubelet[2578]: I0417 23:38:36.244727 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/52277689-f4f8-4eb4-acdf-589f30ebdb48-varrun\") pod \"csi-node-driver-zq7p4\" (UID: \"52277689-f4f8-4eb4-acdf-589f30ebdb48\") " pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:36.245600 kubelet[2578]: E0417 23:38:36.245549 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.245813 kubelet[2578]: W0417 23:38:36.245784 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.246905 kubelet[2578]: E0417 23:38:36.245911 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.246905 kubelet[2578]: I0417 23:38:36.245956 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52277689-f4f8-4eb4-acdf-589f30ebdb48-socket-dir\") pod \"csi-node-driver-zq7p4\" (UID: \"52277689-f4f8-4eb4-acdf-589f30ebdb48\") " pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:36.246905 kubelet[2578]: E0417 23:38:36.246411 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.246905 kubelet[2578]: W0417 23:38:36.246429 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.246905 kubelet[2578]: E0417 23:38:36.246468 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.246905 kubelet[2578]: I0417 23:38:36.246578 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52277689-f4f8-4eb4-acdf-589f30ebdb48-kubelet-dir\") pod \"csi-node-driver-zq7p4\" (UID: \"52277689-f4f8-4eb4-acdf-589f30ebdb48\") " pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:36.247238 kubelet[2578]: E0417 23:38:36.247207 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.247238 kubelet[2578]: W0417 23:38:36.247223 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.247357 kubelet[2578]: E0417 23:38:36.247259 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.247915 kubelet[2578]: E0417 23:38:36.247885 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.247915 kubelet[2578]: W0417 23:38:36.247915 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.248080 kubelet[2578]: E0417 23:38:36.247935 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.248490 kubelet[2578]: E0417 23:38:36.248347 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.248490 kubelet[2578]: W0417 23:38:36.248366 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.248490 kubelet[2578]: E0417 23:38:36.248383 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.248773 kubelet[2578]: I0417 23:38:36.248580 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52277689-f4f8-4eb4-acdf-589f30ebdb48-registration-dir\") pod \"csi-node-driver-zq7p4\" (UID: \"52277689-f4f8-4eb4-acdf-589f30ebdb48\") " pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:36.249539 kubelet[2578]: E0417 23:38:36.249404 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.249539 kubelet[2578]: W0417 23:38:36.249426 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.250101 kubelet[2578]: E0417 23:38:36.249663 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.250531 kubelet[2578]: E0417 23:38:36.250289 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.250531 kubelet[2578]: W0417 23:38:36.250305 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.250531 kubelet[2578]: E0417 23:38:36.250320 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.251130 kubelet[2578]: E0417 23:38:36.250959 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.251130 kubelet[2578]: W0417 23:38:36.250978 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.251130 kubelet[2578]: E0417 23:38:36.250993 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.251346 kubelet[2578]: E0417 23:38:36.251332 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.251398 kubelet[2578]: W0417 23:38:36.251346 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.251398 kubelet[2578]: E0417 23:38:36.251364 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.251778 kubelet[2578]: E0417 23:38:36.251741 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.251778 kubelet[2578]: W0417 23:38:36.251761 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.251778 kubelet[2578]: E0417 23:38:36.251778 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.252316 kubelet[2578]: E0417 23:38:36.252134 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.252316 kubelet[2578]: W0417 23:38:36.252152 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.252316 kubelet[2578]: E0417 23:38:36.252170 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.252652 kubelet[2578]: E0417 23:38:36.252635 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.252741 kubelet[2578]: W0417 23:38:36.252729 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.252797 kubelet[2578]: E0417 23:38:36.252787 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.252884 kubelet[2578]: I0417 23:38:36.252866 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8qc\" (UniqueName: \"kubernetes.io/projected/52277689-f4f8-4eb4-acdf-589f30ebdb48-kube-api-access-wp8qc\") pod \"csi-node-driver-zq7p4\" (UID: \"52277689-f4f8-4eb4-acdf-589f30ebdb48\") " pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:36.253319 kubelet[2578]: E0417 23:38:36.253283 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.253319 kubelet[2578]: W0417 23:38:36.253306 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.253455 kubelet[2578]: E0417 23:38:36.253325 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.253781 kubelet[2578]: E0417 23:38:36.253747 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.253781 kubelet[2578]: W0417 23:38:36.253766 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.253897 kubelet[2578]: E0417 23:38:36.253783 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.281472 containerd[1457]: time="2026-04-17T23:38:36.281406281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6fk2w,Uid:8da04557-fabf-4d5c-87e7-3a298db52751,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:36.326349 containerd[1457]: time="2026-04-17T23:38:36.326041754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:36.327443 containerd[1457]: time="2026-04-17T23:38:36.327234929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:36.328344 containerd[1457]: time="2026-04-17T23:38:36.328268195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:36.328810 containerd[1457]: time="2026-04-17T23:38:36.328662902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:36.354009 kubelet[2578]: E0417 23:38:36.353966 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.354009 kubelet[2578]: W0417 23:38:36.353994 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.354223 kubelet[2578]: E0417 23:38:36.354024 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.354873 kubelet[2578]: E0417 23:38:36.354839 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.354873 kubelet[2578]: W0417 23:38:36.354860 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.355529 kubelet[2578]: E0417 23:38:36.354880 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.355529 kubelet[2578]: E0417 23:38:36.355290 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.355529 kubelet[2578]: W0417 23:38:36.355304 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.355529 kubelet[2578]: E0417 23:38:36.355322 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.355953 kubelet[2578]: E0417 23:38:36.355712 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.355953 kubelet[2578]: W0417 23:38:36.355727 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.355953 kubelet[2578]: E0417 23:38:36.355745 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.356655 kubelet[2578]: E0417 23:38:36.356394 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.356655 kubelet[2578]: W0417 23:38:36.356409 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.356655 kubelet[2578]: E0417 23:38:36.356427 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.356833 systemd[1]: Started cri-containerd-c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b.scope - libcontainer container c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b. 
Apr 17 23:38:36.358532 kubelet[2578]: E0417 23:38:36.358503 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.358532 kubelet[2578]: W0417 23:38:36.358522 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.358810 kubelet[2578]: E0417 23:38:36.358550 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.359637 kubelet[2578]: E0417 23:38:36.359567 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.359637 kubelet[2578]: W0417 23:38:36.359592 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.359637 kubelet[2578]: E0417 23:38:36.359614 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.363519 kubelet[2578]: E0417 23:38:36.363478 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.363519 kubelet[2578]: W0417 23:38:36.363506 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.363688 kubelet[2578]: E0417 23:38:36.363537 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.364201 kubelet[2578]: E0417 23:38:36.363971 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.364201 kubelet[2578]: W0417 23:38:36.363990 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.364201 kubelet[2578]: E0417 23:38:36.364009 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.364531 kubelet[2578]: E0417 23:38:36.364513 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.364805 kubelet[2578]: W0417 23:38:36.364615 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.364805 kubelet[2578]: E0417 23:38:36.364666 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.365072 kubelet[2578]: E0417 23:38:36.365056 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.365191 kubelet[2578]: W0417 23:38:36.365171 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.365323 kubelet[2578]: E0417 23:38:36.365306 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.366044 kubelet[2578]: E0417 23:38:36.365986 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.366044 kubelet[2578]: W0417 23:38:36.366005 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.366044 kubelet[2578]: E0417 23:38:36.366022 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.368027 kubelet[2578]: E0417 23:38:36.367743 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.368027 kubelet[2578]: W0417 23:38:36.367763 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.368027 kubelet[2578]: E0417 23:38:36.367807 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.369743 kubelet[2578]: E0417 23:38:36.369711 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.369921 kubelet[2578]: W0417 23:38:36.369836 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.369921 kubelet[2578]: E0417 23:38:36.369859 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.371178 kubelet[2578]: E0417 23:38:36.370875 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.371178 kubelet[2578]: W0417 23:38:36.370896 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.371178 kubelet[2578]: E0417 23:38:36.370914 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.371757 kubelet[2578]: E0417 23:38:36.371707 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.371943 kubelet[2578]: W0417 23:38:36.371853 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.371943 kubelet[2578]: E0417 23:38:36.371876 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.373005 kubelet[2578]: E0417 23:38:36.372843 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.373005 kubelet[2578]: W0417 23:38:36.372863 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.373005 kubelet[2578]: E0417 23:38:36.372893 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.373821 kubelet[2578]: E0417 23:38:36.373690 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.373821 kubelet[2578]: W0417 23:38:36.373709 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.373821 kubelet[2578]: E0417 23:38:36.373727 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.374586 kubelet[2578]: E0417 23:38:36.374378 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.374586 kubelet[2578]: W0417 23:38:36.374396 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.374586 kubelet[2578]: E0417 23:38:36.374413 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.374945 kubelet[2578]: E0417 23:38:36.374887 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.374945 kubelet[2578]: W0417 23:38:36.374905 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.374945 kubelet[2578]: E0417 23:38:36.374925 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.375883 kubelet[2578]: E0417 23:38:36.375693 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.375883 kubelet[2578]: W0417 23:38:36.375711 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.375883 kubelet[2578]: E0417 23:38:36.375731 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.376876 kubelet[2578]: E0417 23:38:36.376832 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.377087 kubelet[2578]: W0417 23:38:36.376963 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.377087 kubelet[2578]: E0417 23:38:36.376985 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.377926 kubelet[2578]: E0417 23:38:36.377765 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.377926 kubelet[2578]: W0417 23:38:36.377784 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.377926 kubelet[2578]: E0417 23:38:36.377802 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.378920 kubelet[2578]: E0417 23:38:36.378692 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.378920 kubelet[2578]: W0417 23:38:36.378711 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.378920 kubelet[2578]: E0417 23:38:36.378731 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:36.379666 kubelet[2578]: E0417 23:38:36.379565 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.379666 kubelet[2578]: W0417 23:38:36.379612 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.379666 kubelet[2578]: E0417 23:38:36.379633 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.390837 kubelet[2578]: E0417 23:38:36.390773 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:36.390837 kubelet[2578]: W0417 23:38:36.390837 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:36.391034 kubelet[2578]: E0417 23:38:36.390864 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:36.411134 containerd[1457]: time="2026-04-17T23:38:36.411083034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6fk2w,Uid:8da04557-fabf-4d5c-87e7-3a298db52751,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\"" Apr 17 23:38:37.183823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744118967.mount: Deactivated successfully. 
Apr 17 23:38:37.232179 kubelet[2578]: E0417 23:38:37.232105 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:38.344248 containerd[1457]: time="2026-04-17T23:38:38.344180053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:38.345635 containerd[1457]: time="2026-04-17T23:38:38.345559187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:38:38.347050 containerd[1457]: time="2026-04-17T23:38:38.346977739Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:38.350298 containerd[1457]: time="2026-04-17T23:38:38.350216650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:38.352120 containerd[1457]: time="2026-04-17T23:38:38.351239874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.144056697s" Apr 17 23:38:38.352120 containerd[1457]: time="2026-04-17T23:38:38.351286593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:38:38.354065 containerd[1457]: time="2026-04-17T23:38:38.354026392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:38:38.378358 containerd[1457]: time="2026-04-17T23:38:38.378197800Z" level=info msg="CreateContainer within sandbox \"0148b6f6702fd3a2e8a13ff16c12b9d4835b27ca2739b4de59819bb648ab040d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:38:38.398923 containerd[1457]: time="2026-04-17T23:38:38.398801662Z" level=info msg="CreateContainer within sandbox \"0148b6f6702fd3a2e8a13ff16c12b9d4835b27ca2739b4de59819bb648ab040d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ec6b1e53011beb011a11389dafb4c873b72e26a45f856118fbcdd78f3f81780\"" Apr 17 23:38:38.401491 containerd[1457]: time="2026-04-17T23:38:38.399906668Z" level=info msg="StartContainer for \"5ec6b1e53011beb011a11389dafb4c873b72e26a45f856118fbcdd78f3f81780\"" Apr 17 23:38:38.449737 systemd[1]: Started cri-containerd-5ec6b1e53011beb011a11389dafb4c873b72e26a45f856118fbcdd78f3f81780.scope - libcontainer container 5ec6b1e53011beb011a11389dafb4c873b72e26a45f856118fbcdd78f3f81780. 
Apr 17 23:38:38.509534 containerd[1457]: time="2026-04-17T23:38:38.509208635Z" level=info msg="StartContainer for \"5ec6b1e53011beb011a11389dafb4c873b72e26a45f856118fbcdd78f3f81780\" returns successfully" Apr 17 23:38:39.231953 kubelet[2578]: E0417 23:38:39.231885 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:39.390285 containerd[1457]: time="2026-04-17T23:38:39.388681736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:39.392601 containerd[1457]: time="2026-04-17T23:38:39.392542949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:38:39.394942 containerd[1457]: time="2026-04-17T23:38:39.394372260Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:39.400563 containerd[1457]: time="2026-04-17T23:38:39.400492609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:39.401967 containerd[1457]: time="2026-04-17T23:38:39.401574684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.047493668s" Apr 17 23:38:39.401967 containerd[1457]: time="2026-04-17T23:38:39.401626096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:38:39.413953 containerd[1457]: time="2026-04-17T23:38:39.413708902Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:38:39.420115 kubelet[2578]: I0417 23:38:39.419255 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7685585854-kwm48" podStartSLOduration=2.273209555 podStartE2EDuration="4.41923504s" podCreationTimestamp="2026-04-17 23:38:35 +0000 UTC" firstStartedPulling="2026-04-17 23:38:36.206659368 +0000 UTC m=+24.178659206" lastFinishedPulling="2026-04-17 23:38:38.352684821 +0000 UTC m=+26.324684691" observedRunningTime="2026-04-17 23:38:39.417299525 +0000 UTC m=+27.389299394" watchObservedRunningTime="2026-04-17 23:38:39.41923504 +0000 UTC m=+27.391234907" Apr 17 23:38:39.446583 containerd[1457]: time="2026-04-17T23:38:39.446413790Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a\"" Apr 17 23:38:39.447712 containerd[1457]: time="2026-04-17T23:38:39.447639345Z" level=info msg="StartContainer for \"6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a\"" Apr 17 23:38:39.484880 kubelet[2578]: E0417 23:38:39.484051 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 
17 23:38:39.488607 kubelet[2578]: W0417 23:38:39.488036 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.488607 kubelet[2578]: E0417 23:38:39.488205 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.491144 kubelet[2578]: E0417 23:38:39.490443 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.491144 kubelet[2578]: W0417 23:38:39.490498 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.491144 kubelet[2578]: E0417 23:38:39.490528 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.491469 kubelet[2578]: E0417 23:38:39.491379 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.491469 kubelet[2578]: W0417 23:38:39.491404 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.491707 kubelet[2578]: E0417 23:38:39.491428 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:39.493234 kubelet[2578]: E0417 23:38:39.493063 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.493234 kubelet[2578]: W0417 23:38:39.493104 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.493234 kubelet[2578]: E0417 23:38:39.493127 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.499505 kubelet[2578]: E0417 23:38:39.497101 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.499505 kubelet[2578]: W0417 23:38:39.497122 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.499505 kubelet[2578]: E0417 23:38:39.497144 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:39.499505 kubelet[2578]: E0417 23:38:39.497487 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.499505 kubelet[2578]: W0417 23:38:39.497502 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.499505 kubelet[2578]: E0417 23:38:39.497523 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.502083 kubelet[2578]: E0417 23:38:39.501537 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.502083 kubelet[2578]: W0417 23:38:39.501558 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.502083 kubelet[2578]: E0417 23:38:39.501580 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:39.502972 kubelet[2578]: E0417 23:38:39.502800 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.502972 kubelet[2578]: W0417 23:38:39.502830 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.502972 kubelet[2578]: E0417 23:38:39.502849 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.503806 kubelet[2578]: E0417 23:38:39.503518 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.503806 kubelet[2578]: W0417 23:38:39.503539 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.503806 kubelet[2578]: E0417 23:38:39.503561 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:39.504234 kubelet[2578]: E0417 23:38:39.504216 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.504441 kubelet[2578]: W0417 23:38:39.504347 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.504441 kubelet[2578]: E0417 23:38:39.504373 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.505107 kubelet[2578]: E0417 23:38:39.504958 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.505107 kubelet[2578]: W0417 23:38:39.504975 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.505107 kubelet[2578]: E0417 23:38:39.504993 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:39.505564 kubelet[2578]: E0417 23:38:39.505546 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.505706 kubelet[2578]: W0417 23:38:39.505645 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.505706 kubelet[2578]: E0417 23:38:39.505669 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.506585 kubelet[2578]: E0417 23:38:39.506565 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.506834 kubelet[2578]: W0417 23:38:39.506687 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.506834 kubelet[2578]: E0417 23:38:39.506711 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:39.507414 kubelet[2578]: E0417 23:38:39.507395 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.507708 kubelet[2578]: W0417 23:38:39.507540 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.507708 kubelet[2578]: E0417 23:38:39.507592 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.509735 kubelet[2578]: E0417 23:38:39.509691 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:39.509735 kubelet[2578]: W0417 23:38:39.509710 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:39.509735 kubelet[2578]: E0417 23:38:39.509729 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:39.512986 systemd[1]: Started cri-containerd-6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a.scope - libcontainer container 6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a. Apr 17 23:38:39.555604 containerd[1457]: time="2026-04-17T23:38:39.555546579Z" level=info msg="StartContainer for \"6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a\" returns successfully" Apr 17 23:38:39.571708 systemd[1]: cri-containerd-6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a.scope: Deactivated successfully. 
Apr 17 23:38:39.615634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a-rootfs.mount: Deactivated successfully. Apr 17 23:38:39.953412 containerd[1457]: time="2026-04-17T23:38:39.953134479Z" level=info msg="shim disconnected" id=6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a namespace=k8s.io Apr 17 23:38:39.953412 containerd[1457]: time="2026-04-17T23:38:39.953215251Z" level=warning msg="cleaning up after shim disconnected" id=6f7559248c41e7ec8b638680a4f968348d001ea9874c67676d253621cc0db99a namespace=k8s.io Apr 17 23:38:39.953412 containerd[1457]: time="2026-04-17T23:38:39.953232120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:40.390335 kubelet[2578]: I0417 23:38:40.389984 2578 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:40.394271 containerd[1457]: time="2026-04-17T23:38:40.393767988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:38:41.232535 kubelet[2578]: E0417 23:38:41.232162 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:43.233016 kubelet[2578]: E0417 23:38:43.232784 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:44.313092 kubelet[2578]: I0417 23:38:44.313043 2578 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:45.232081 kubelet[2578]: E0417 23:38:45.231993 2578 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:47.232298 kubelet[2578]: E0417 23:38:47.232235 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:47.692162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325856551.mount: Deactivated successfully. Apr 17 23:38:47.723155 containerd[1457]: time="2026-04-17T23:38:47.723077855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:47.724891 containerd[1457]: time="2026-04-17T23:38:47.724740890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:38:47.728037 containerd[1457]: time="2026-04-17T23:38:47.726592126Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:47.730473 containerd[1457]: time="2026-04-17T23:38:47.730400502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:47.731694 containerd[1457]: time="2026-04-17T23:38:47.731649530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.33779219s" Apr 17 23:38:47.731876 containerd[1457]: time="2026-04-17T23:38:47.731832491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:38:47.738563 containerd[1457]: time="2026-04-17T23:38:47.738510696Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:38:47.766623 containerd[1457]: time="2026-04-17T23:38:47.766434137Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e\"" Apr 17 23:38:47.767827 containerd[1457]: time="2026-04-17T23:38:47.767793734Z" level=info msg="StartContainer for \"2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e\"" Apr 17 23:38:47.827696 systemd[1]: Started cri-containerd-2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e.scope - libcontainer container 2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e. Apr 17 23:38:47.870429 containerd[1457]: time="2026-04-17T23:38:47.870373246Z" level=info msg="StartContainer for \"2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e\" returns successfully" Apr 17 23:38:47.937547 systemd[1]: cri-containerd-2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e.scope: Deactivated successfully. 
Apr 17 23:38:48.690497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e-rootfs.mount: Deactivated successfully. Apr 17 23:38:49.232089 kubelet[2578]: E0417 23:38:49.232001 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:49.562088 containerd[1457]: time="2026-04-17T23:38:49.561876898Z" level=info msg="shim disconnected" id=2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e namespace=k8s.io Apr 17 23:38:49.562088 containerd[1457]: time="2026-04-17T23:38:49.561988399Z" level=warning msg="cleaning up after shim disconnected" id=2ab9f0aace1cabf3b25789d81c9145ab796f9515660584fb3fafd86be595fa9e namespace=k8s.io Apr 17 23:38:49.562088 containerd[1457]: time="2026-04-17T23:38:49.562052778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:50.429181 containerd[1457]: time="2026-04-17T23:38:50.429118944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:38:51.232496 kubelet[2578]: E0417 23:38:51.231814 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:53.232538 kubelet[2578]: E0417 23:38:53.232443 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zq7p4" 
podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:53.674043 containerd[1457]: time="2026-04-17T23:38:53.673924645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:53.676861 containerd[1457]: time="2026-04-17T23:38:53.676491171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:38:53.681060 containerd[1457]: time="2026-04-17T23:38:53.680922036Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:53.687812 containerd[1457]: time="2026-04-17T23:38:53.687745375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:53.689042 containerd[1457]: time="2026-04-17T23:38:53.688985347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.259804504s" Apr 17 23:38:53.689219 containerd[1457]: time="2026-04-17T23:38:53.689190584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:38:53.696031 containerd[1457]: time="2026-04-17T23:38:53.695978201Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:38:53.719533 containerd[1457]: 
time="2026-04-17T23:38:53.719436433Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282\"" Apr 17 23:38:53.720784 containerd[1457]: time="2026-04-17T23:38:53.720719147Z" level=info msg="StartContainer for \"e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282\"" Apr 17 23:38:53.777230 systemd[1]: Started cri-containerd-e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282.scope - libcontainer container e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282. Apr 17 23:38:53.821613 containerd[1457]: time="2026-04-17T23:38:53.821430419Z" level=info msg="StartContainer for \"e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282\" returns successfully" Apr 17 23:38:54.879160 containerd[1457]: time="2026-04-17T23:38:54.879088014Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:38:54.882392 systemd[1]: cri-containerd-e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282.scope: Deactivated successfully. Apr 17 23:38:54.914342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282-rootfs.mount: Deactivated successfully. Apr 17 23:38:54.948287 kubelet[2578]: I0417 23:38:54.948073 2578 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 17 23:38:55.261523 systemd[1]: Created slice kubepods-burstable-pod6c9e2a5f_cf47_40c7_aad6_17d5a4bfa72f.slice - libcontainer container kubepods-burstable-pod6c9e2a5f_cf47_40c7_aad6_17d5a4bfa72f.slice. 
Apr 17 23:38:55.311069 systemd[1]: Created slice kubepods-burstable-pod21f2939a_2dbb_4eca_a507_d3f15555c474.slice - libcontainer container kubepods-burstable-pod21f2939a_2dbb_4eca_a507_d3f15555c474.slice. Apr 17 23:38:55.322966 systemd[1]: Created slice kubepods-besteffort-pod52277689_f4f8_4eb4_acdf_589f30ebdb48.slice - libcontainer container kubepods-besteffort-pod52277689_f4f8_4eb4_acdf_589f30ebdb48.slice. Apr 17 23:38:55.338929 kubelet[2578]: I0417 23:38:55.338847 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wp6l\" (UniqueName: \"kubernetes.io/projected/6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f-kube-api-access-7wp6l\") pod \"coredns-7d764666f9-c6pfx\" (UID: \"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f\") " pod="kube-system/coredns-7d764666f9-c6pfx" Apr 17 23:38:55.338929 kubelet[2578]: I0417 23:38:55.338911 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f2939a-2dbb-4eca-a507-d3f15555c474-config-volume\") pod \"coredns-7d764666f9-fkclr\" (UID: \"21f2939a-2dbb-4eca-a507-d3f15555c474\") " pod="kube-system/coredns-7d764666f9-fkclr" Apr 17 23:38:55.476430 kubelet[2578]: I0417 23:38:55.338971 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f-config-volume\") pod \"coredns-7d764666f9-c6pfx\" (UID: \"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f\") " pod="kube-system/coredns-7d764666f9-c6pfx" Apr 17 23:38:55.476430 kubelet[2578]: I0417 23:38:55.338997 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qns5n\" (UniqueName: \"kubernetes.io/projected/21f2939a-2dbb-4eca-a507-d3f15555c474-kube-api-access-qns5n\") pod \"coredns-7d764666f9-fkclr\" (UID: \"21f2939a-2dbb-4eca-a507-d3f15555c474\") " 
pod="kube-system/coredns-7d764666f9-fkclr" Apr 17 23:38:55.487558 containerd[1457]: time="2026-04-17T23:38:55.487022108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq7p4,Uid:52277689-f4f8-4eb4-acdf-589f30ebdb48,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:55.494198 containerd[1457]: time="2026-04-17T23:38:55.493583593Z" level=info msg="shim disconnected" id=e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282 namespace=k8s.io Apr 17 23:38:55.494198 containerd[1457]: time="2026-04-17T23:38:55.493664341Z" level=warning msg="cleaning up after shim disconnected" id=e2bb15bb3e736663486812b1062010d235a4e28e0bf46063f6456b3c8d7c6282 namespace=k8s.io Apr 17 23:38:55.494198 containerd[1457]: time="2026-04-17T23:38:55.493681224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:55.541370 kubelet[2578]: I0417 23:38:55.541213 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5gtw\" (UniqueName: \"kubernetes.io/projected/1c2398f0-91d9-434e-8477-385776513cc3-kube-api-access-n5gtw\") pod \"goldmane-9f7667bb8-pkztx\" (UID: \"1c2398f0-91d9-434e-8477-385776513cc3\") " pod="calico-system/goldmane-9f7667bb8-pkztx" Apr 17 23:38:55.541898 kubelet[2578]: I0417 23:38:55.541625 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41734535-6436-46fc-9937-84a76aab1f06-tigera-ca-bundle\") pod \"calico-kube-controllers-5f7d986d88-pzk8v\" (UID: \"41734535-6436-46fc-9937-84a76aab1f06\") " pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" Apr 17 23:38:55.541898 kubelet[2578]: I0417 23:38:55.541785 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcsdr\" (UniqueName: \"kubernetes.io/projected/7a296da7-7b92-4245-a4f5-9775c1f8a482-kube-api-access-mcsdr\") pod 
\"calico-apiserver-5664b8d97f-bbdfw\" (UID: \"7a296da7-7b92-4245-a4f5-9775c1f8a482\") " pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" Apr 17 23:38:55.542401 kubelet[2578]: I0417 23:38:55.542201 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c2398f0-91d9-434e-8477-385776513cc3-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-pkztx\" (UID: \"1c2398f0-91d9-434e-8477-385776513cc3\") " pod="calico-system/goldmane-9f7667bb8-pkztx" Apr 17 23:38:55.542401 kubelet[2578]: I0417 23:38:55.542263 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6sw4\" (UniqueName: \"kubernetes.io/projected/41734535-6436-46fc-9937-84a76aab1f06-kube-api-access-p6sw4\") pod \"calico-kube-controllers-5f7d986d88-pzk8v\" (UID: \"41734535-6436-46fc-9937-84a76aab1f06\") " pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" Apr 17 23:38:55.542401 kubelet[2578]: I0417 23:38:55.542299 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plrj\" (UniqueName: \"kubernetes.io/projected/91de5af7-c91f-4e46-b3e9-42f53f3c3734-kube-api-access-6plrj\") pod \"calico-apiserver-5664b8d97f-trlct\" (UID: \"91de5af7-c91f-4e46-b3e9-42f53f3c3734\") " pod="calico-system/calico-apiserver-5664b8d97f-trlct" Apr 17 23:38:55.542906 kubelet[2578]: I0417 23:38:55.542651 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c2398f0-91d9-434e-8477-385776513cc3-config\") pod \"goldmane-9f7667bb8-pkztx\" (UID: \"1c2398f0-91d9-434e-8477-385776513cc3\") " pod="calico-system/goldmane-9f7667bb8-pkztx" Apr 17 23:38:55.542906 kubelet[2578]: I0417 23:38:55.542750 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" 
(UniqueName: \"kubernetes.io/secret/1c2398f0-91d9-434e-8477-385776513cc3-goldmane-key-pair\") pod \"goldmane-9f7667bb8-pkztx\" (UID: \"1c2398f0-91d9-434e-8477-385776513cc3\") " pod="calico-system/goldmane-9f7667bb8-pkztx" Apr 17 23:38:55.542906 kubelet[2578]: I0417 23:38:55.542798 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a296da7-7b92-4245-a4f5-9775c1f8a482-calico-apiserver-certs\") pod \"calico-apiserver-5664b8d97f-bbdfw\" (UID: \"7a296da7-7b92-4245-a4f5-9775c1f8a482\") " pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" Apr 17 23:38:55.542906 kubelet[2578]: I0417 23:38:55.542862 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91de5af7-c91f-4e46-b3e9-42f53f3c3734-calico-apiserver-certs\") pod \"calico-apiserver-5664b8d97f-trlct\" (UID: \"91de5af7-c91f-4e46-b3e9-42f53f3c3734\") " pod="calico-system/calico-apiserver-5664b8d97f-trlct" Apr 17 23:38:55.566714 systemd[1]: Created slice kubepods-besteffort-pod41734535_6436_46fc_9937_84a76aab1f06.slice - libcontainer container kubepods-besteffort-pod41734535_6436_46fc_9937_84a76aab1f06.slice. Apr 17 23:38:55.596231 systemd[1]: Created slice kubepods-besteffort-pod91de5af7_c91f_4e46_b3e9_42f53f3c3734.slice - libcontainer container kubepods-besteffort-pod91de5af7_c91f_4e46_b3e9_42f53f3c3734.slice. Apr 17 23:38:55.604482 containerd[1457]: time="2026-04-17T23:38:55.602697565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c6pfx,Uid:6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:55.621733 systemd[1]: Created slice kubepods-besteffort-pod7a296da7_7b92_4245_a4f5_9775c1f8a482.slice - libcontainer container kubepods-besteffort-pod7a296da7_7b92_4245_a4f5_9775c1f8a482.slice. 
Apr 17 23:38:55.626140 containerd[1457]: time="2026-04-17T23:38:55.626088650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fkclr,Uid:21f2939a-2dbb-4eca-a507-d3f15555c474,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:55.645983 systemd[1]: Created slice kubepods-besteffort-pod1c2398f0_91d9_434e_8477_385776513cc3.slice - libcontainer container kubepods-besteffort-pod1c2398f0_91d9_434e_8477_385776513cc3.slice. Apr 17 23:38:55.649196 kubelet[2578]: I0417 23:38:55.647407 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddngr\" (UniqueName: \"kubernetes.io/projected/653de30e-add6-4842-ab39-9c2b0c910fb8-kube-api-access-ddngr\") pod \"whisker-5ccff6d658-s9fnj\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " pod="calico-system/whisker-5ccff6d658-s9fnj" Apr 17 23:38:55.649337 kubelet[2578]: I0417 23:38:55.649312 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-nginx-config\") pod \"whisker-5ccff6d658-s9fnj\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " pod="calico-system/whisker-5ccff6d658-s9fnj" Apr 17 23:38:55.650413 kubelet[2578]: I0417 23:38:55.649437 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-ca-bundle\") pod \"whisker-5ccff6d658-s9fnj\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " pod="calico-system/whisker-5ccff6d658-s9fnj" Apr 17 23:38:55.652762 kubelet[2578]: I0417 23:38:55.651768 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-backend-key-pair\") pod \"whisker-5ccff6d658-s9fnj\" (UID: 
\"653de30e-add6-4842-ab39-9c2b0c910fb8\") " pod="calico-system/whisker-5ccff6d658-s9fnj" Apr 17 23:38:55.686083 systemd[1]: Created slice kubepods-besteffort-pod653de30e_add6_4842_ab39_9c2b0c910fb8.slice - libcontainer container kubepods-besteffort-pod653de30e_add6_4842_ab39_9c2b0c910fb8.slice. Apr 17 23:38:55.838672 containerd[1457]: time="2026-04-17T23:38:55.838506973Z" level=error msg="Failed to destroy network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.839275 containerd[1457]: time="2026-04-17T23:38:55.839232068Z" level=error msg="encountered an error cleaning up failed sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.839483 containerd[1457]: time="2026-04-17T23:38:55.839425506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq7p4,Uid:52277689-f4f8-4eb4-acdf-589f30ebdb48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.839938 kubelet[2578]: E0417 23:38:55.839886 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.840246 kubelet[2578]: E0417 23:38:55.840215 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:55.840405 kubelet[2578]: E0417 23:38:55.840378 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zq7p4" Apr 17 23:38:55.841377 kubelet[2578]: E0417 23:38:55.841073 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zq7p4_calico-system(52277689-f4f8-4eb4-acdf-589f30ebdb48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zq7p4_calico-system(52277689-f4f8-4eb4-acdf-589f30ebdb48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:55.871030 containerd[1457]: time="2026-04-17T23:38:55.870968860Z" 
level=error msg="Failed to destroy network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.871729 containerd[1457]: time="2026-04-17T23:38:55.871678111Z" level=error msg="encountered an error cleaning up failed sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.871962 containerd[1457]: time="2026-04-17T23:38:55.871922210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c6pfx,Uid:6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.872394 kubelet[2578]: E0417 23:38:55.872351 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.872734 kubelet[2578]: E0417 23:38:55.872656 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c6pfx" Apr 17 23:38:55.873047 kubelet[2578]: E0417 23:38:55.872905 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c6pfx" Apr 17 23:38:55.873919 kubelet[2578]: E0417 23:38:55.873225 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-c6pfx_kube-system(6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-c6pfx_kube-system(6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-c6pfx" podUID="6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f" Apr 17 23:38:55.876463 containerd[1457]: time="2026-04-17T23:38:55.876390848Z" level=error msg="Failed to destroy network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.876899 
containerd[1457]: time="2026-04-17T23:38:55.876853635Z" level=error msg="encountered an error cleaning up failed sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.876994 containerd[1457]: time="2026-04-17T23:38:55.876936347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fkclr,Uid:21f2939a-2dbb-4eca-a507-d3f15555c474,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.877248 kubelet[2578]: E0417 23:38:55.877205 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:55.877335 kubelet[2578]: E0417 23:38:55.877271 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-fkclr" Apr 17 23:38:55.877335 kubelet[2578]: E0417 23:38:55.877301 2578 kuberuntime_manager.go:1558] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-fkclr" Apr 17 23:38:55.877466 kubelet[2578]: E0417 23:38:55.877376 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-fkclr_kube-system(21f2939a-2dbb-4eca-a507-d3f15555c474)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-fkclr_kube-system(21f2939a-2dbb-4eca-a507-d3f15555c474)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-fkclr" podUID="21f2939a-2dbb-4eca-a507-d3f15555c474" Apr 17 23:38:55.890285 containerd[1457]: time="2026-04-17T23:38:55.890235394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d986d88-pzk8v,Uid:41734535-6436-46fc-9937-84a76aab1f06,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:55.921474 containerd[1457]: time="2026-04-17T23:38:55.918540619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-trlct,Uid:91de5af7-c91f-4e46-b3e9-42f53f3c3734,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:55.938485 containerd[1457]: time="2026-04-17T23:38:55.937875927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-bbdfw,Uid:7a296da7-7b92-4245-a4f5-9775c1f8a482,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:55.947986 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530-shm.mount: Deactivated successfully. Apr 17 23:38:56.002635 containerd[1457]: time="2026-04-17T23:38:56.002424737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-pkztx,Uid:1c2398f0-91d9-434e-8477-385776513cc3,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:56.010977 containerd[1457]: time="2026-04-17T23:38:56.010917526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ccff6d658-s9fnj,Uid:653de30e-add6-4842-ab39-9c2b0c910fb8,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:56.171920 containerd[1457]: time="2026-04-17T23:38:56.171808147Z" level=error msg="Failed to destroy network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.172274 containerd[1457]: time="2026-04-17T23:38:56.172228151Z" level=error msg="encountered an error cleaning up failed sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.172365 containerd[1457]: time="2026-04-17T23:38:56.172310976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d986d88-pzk8v,Uid:41734535-6436-46fc-9937-84a76aab1f06,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Apr 17 23:38:56.172691 kubelet[2578]: E0417 23:38:56.172636 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.174753 kubelet[2578]: E0417 23:38:56.173355 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" Apr 17 23:38:56.174753 kubelet[2578]: E0417 23:38:56.173468 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" Apr 17 23:38:56.174753 kubelet[2578]: E0417 23:38:56.174538 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f7d986d88-pzk8v_calico-system(41734535-6436-46fc-9937-84a76aab1f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f7d986d88-pzk8v_calico-system(41734535-6436-46fc-9937-84a76aab1f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" podUID="41734535-6436-46fc-9937-84a76aab1f06" Apr 17 23:38:56.258147 containerd[1457]: time="2026-04-17T23:38:56.258083834Z" level=error msg="Failed to destroy network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.259624 containerd[1457]: time="2026-04-17T23:38:56.258574811Z" level=error msg="encountered an error cleaning up failed sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.259624 containerd[1457]: time="2026-04-17T23:38:56.258662801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-pkztx,Uid:1c2398f0-91d9-434e-8477-385776513cc3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.260641 kubelet[2578]: E0417 23:38:56.258962 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.260641 kubelet[2578]: E0417 23:38:56.259031 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-pkztx" Apr 17 23:38:56.260641 kubelet[2578]: E0417 23:38:56.259084 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-pkztx" Apr 17 23:38:56.260833 kubelet[2578]: E0417 23:38:56.259167 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-pkztx_calico-system(1c2398f0-91d9-434e-8477-385776513cc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-pkztx_calico-system(1c2398f0-91d9-434e-8477-385776513cc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-pkztx" 
podUID="1c2398f0-91d9-434e-8477-385776513cc3" Apr 17 23:38:56.262564 containerd[1457]: time="2026-04-17T23:38:56.262521653Z" level=error msg="Failed to destroy network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.263144 containerd[1457]: time="2026-04-17T23:38:56.263102557Z" level=error msg="encountered an error cleaning up failed sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.263490 containerd[1457]: time="2026-04-17T23:38:56.263406358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-bbdfw,Uid:7a296da7-7b92-4245-a4f5-9775c1f8a482,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.264105 kubelet[2578]: E0417 23:38:56.263860 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.264105 kubelet[2578]: E0417 23:38:56.263929 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" Apr 17 23:38:56.264105 kubelet[2578]: E0417 23:38:56.263961 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" Apr 17 23:38:56.264321 kubelet[2578]: E0417 23:38:56.264029 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664b8d97f-bbdfw_calico-system(7a296da7-7b92-4245-a4f5-9775c1f8a482)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664b8d97f-bbdfw_calico-system(7a296da7-7b92-4245-a4f5-9775c1f8a482)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" podUID="7a296da7-7b92-4245-a4f5-9775c1f8a482" Apr 17 23:38:56.276498 containerd[1457]: time="2026-04-17T23:38:56.276024805Z" level=error msg="Failed to destroy network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.278126 containerd[1457]: time="2026-04-17T23:38:56.278070811Z" level=error msg="encountered an error cleaning up failed sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.278713 containerd[1457]: time="2026-04-17T23:38:56.278578516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-trlct,Uid:91de5af7-c91f-4e46-b3e9-42f53f3c3734,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.280697 kubelet[2578]: E0417 23:38:56.280256 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.280697 kubelet[2578]: E0417 23:38:56.280351 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-5664b8d97f-trlct" Apr 17 23:38:56.280697 kubelet[2578]: E0417 23:38:56.280384 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5664b8d97f-trlct" Apr 17 23:38:56.280935 kubelet[2578]: E0417 23:38:56.280475 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664b8d97f-trlct_calico-system(91de5af7-c91f-4e46-b3e9-42f53f3c3734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664b8d97f-trlct_calico-system(91de5af7-c91f-4e46-b3e9-42f53f3c3734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5664b8d97f-trlct" podUID="91de5af7-c91f-4e46-b3e9-42f53f3c3734" Apr 17 23:38:56.282553 containerd[1457]: time="2026-04-17T23:38:56.282485782Z" level=error msg="Failed to destroy network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.283788 containerd[1457]: time="2026-04-17T23:38:56.283741205Z" level=error msg="encountered an error cleaning up failed sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.283927 containerd[1457]: time="2026-04-17T23:38:56.283817856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ccff6d658-s9fnj,Uid:653de30e-add6-4842-ab39-9c2b0c910fb8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.284187 kubelet[2578]: E0417 23:38:56.284054 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.284187 kubelet[2578]: E0417 23:38:56.284117 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ccff6d658-s9fnj" Apr 17 23:38:56.284187 kubelet[2578]: E0417 23:38:56.284145 2578 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ccff6d658-s9fnj" Apr 17 23:38:56.284485 kubelet[2578]: E0417 23:38:56.284227 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ccff6d658-s9fnj_calico-system(653de30e-add6-4842-ab39-9c2b0c910fb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ccff6d658-s9fnj_calico-system(653de30e-add6-4842-ab39-9c2b0c910fb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ccff6d658-s9fnj" podUID="653de30e-add6-4842-ab39-9c2b0c910fb8" Apr 17 23:38:56.451399 kubelet[2578]: I0417 23:38:56.449332 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:38:56.451571 containerd[1457]: time="2026-04-17T23:38:56.450649336Z" level=info msg="StopPodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\"" Apr 17 23:38:56.451571 containerd[1457]: time="2026-04-17T23:38:56.450943607Z" level=info msg="Ensure that sandbox f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530 in task-service has been cleanup successfully" Apr 17 23:38:56.466282 kubelet[2578]: I0417 23:38:56.466246 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:38:56.472807 containerd[1457]: time="2026-04-17T23:38:56.472731379Z" level=info msg="StopPodSandbox for 
\"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\"" Apr 17 23:38:56.478561 containerd[1457]: time="2026-04-17T23:38:56.478504639Z" level=info msg="Ensure that sandbox 279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248 in task-service has been cleanup successfully" Apr 17 23:38:56.493952 kubelet[2578]: I0417 23:38:56.493144 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:38:56.495558 containerd[1457]: time="2026-04-17T23:38:56.494162401Z" level=info msg="StopPodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\"" Apr 17 23:38:56.495558 containerd[1457]: time="2026-04-17T23:38:56.494806893Z" level=info msg="Ensure that sandbox 55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552 in task-service has been cleanup successfully" Apr 17 23:38:56.506485 kubelet[2578]: I0417 23:38:56.506348 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:38:56.510491 containerd[1457]: time="2026-04-17T23:38:56.509524478Z" level=info msg="StopPodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\"" Apr 17 23:38:56.518578 containerd[1457]: time="2026-04-17T23:38:56.518511781Z" level=info msg="Ensure that sandbox d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7 in task-service has been cleanup successfully" Apr 17 23:38:56.530336 containerd[1457]: time="2026-04-17T23:38:56.530270745Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:38:56.539992 kubelet[2578]: I0417 23:38:56.539066 2578 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:38:56.543950 containerd[1457]: time="2026-04-17T23:38:56.543883061Z" level=info msg="StopPodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\"" Apr 17 23:38:56.545444 containerd[1457]: time="2026-04-17T23:38:56.545408727Z" level=info msg="Ensure that sandbox bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459 in task-service has been cleanup successfully" Apr 17 23:38:56.559200 kubelet[2578]: I0417 23:38:56.559155 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:38:56.560542 containerd[1457]: time="2026-04-17T23:38:56.560468701Z" level=info msg="StopPodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\"" Apr 17 23:38:56.561408 containerd[1457]: time="2026-04-17T23:38:56.561023928Z" level=info msg="Ensure that sandbox 303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560 in task-service has been cleanup successfully" Apr 17 23:38:56.577025 kubelet[2578]: I0417 23:38:56.576034 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:38:56.578024 containerd[1457]: time="2026-04-17T23:38:56.577977242Z" level=info msg="StopPodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\"" Apr 17 23:38:56.578316 containerd[1457]: time="2026-04-17T23:38:56.578270796Z" level=info msg="Ensure that sandbox ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c in task-service has been cleanup successfully" Apr 17 23:38:56.590481 kubelet[2578]: I0417 23:38:56.589779 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:38:56.592755 containerd[1457]: 
time="2026-04-17T23:38:56.592702026Z" level=info msg="StopPodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\"" Apr 17 23:38:56.594623 containerd[1457]: time="2026-04-17T23:38:56.594152010Z" level=info msg="Ensure that sandbox 713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878 in task-service has been cleanup successfully" Apr 17 23:38:56.625468 containerd[1457]: time="2026-04-17T23:38:56.625383215Z" level=info msg="CreateContainer within sandbox \"c7746e6107af1e89c692929783c898e0bb8e828cf061531cbe81ed0a13590a1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370\"" Apr 17 23:38:56.625897 containerd[1457]: time="2026-04-17T23:38:56.625850106Z" level=error msg="StopPodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" failed" error="failed to destroy network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.626839 kubelet[2578]: E0417 23:38:56.626340 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:38:56.626839 kubelet[2578]: E0417 23:38:56.626401 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530"} Apr 17 23:38:56.626839 kubelet[2578]: E0417 
23:38:56.626607 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52277689-f4f8-4eb4-acdf-589f30ebdb48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.626839 kubelet[2578]: E0417 23:38:56.626646 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52277689-f4f8-4eb4-acdf-589f30ebdb48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zq7p4" podUID="52277689-f4f8-4eb4-acdf-589f30ebdb48" Apr 17 23:38:56.627899 containerd[1457]: time="2026-04-17T23:38:56.627759229Z" level=info msg="StartContainer for \"01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370\"" Apr 17 23:38:56.711963 containerd[1457]: time="2026-04-17T23:38:56.711636505Z" level=error msg="StopPodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" failed" error="failed to destroy network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.713763 kubelet[2578]: E0417 23:38:56.713674 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to destroy network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:38:56.713763 kubelet[2578]: E0417 23:38:56.713751 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459"} Apr 17 23:38:56.714101 kubelet[2578]: E0417 23:38:56.713806 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c2398f0-91d9-434e-8477-385776513cc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.714101 kubelet[2578]: E0417 23:38:56.713878 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c2398f0-91d9-434e-8477-385776513cc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-pkztx" podUID="1c2398f0-91d9-434e-8477-385776513cc3" Apr 17 23:38:56.740317 containerd[1457]: time="2026-04-17T23:38:56.739635671Z" level=error msg="StopPodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" failed" 
error="failed to destroy network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.740530 kubelet[2578]: E0417 23:38:56.740004 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:38:56.740530 kubelet[2578]: E0417 23:38:56.740070 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c"} Apr 17 23:38:56.740530 kubelet[2578]: E0417 23:38:56.740118 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21f2939a-2dbb-4eca-a507-d3f15555c474\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.740530 kubelet[2578]: E0417 23:38:56.740160 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21f2939a-2dbb-4eca-a507-d3f15555c474\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-fkclr" podUID="21f2939a-2dbb-4eca-a507-d3f15555c474" Apr 17 23:38:56.744338 containerd[1457]: time="2026-04-17T23:38:56.744265499Z" level=error msg="StopPodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" failed" error="failed to destroy network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.745073 kubelet[2578]: E0417 23:38:56.744840 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:38:56.745073 kubelet[2578]: E0417 23:38:56.744906 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552"} Apr 17 23:38:56.745073 kubelet[2578]: E0417 23:38:56.744953 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41734535-6436-46fc-9937-84a76aab1f06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.745073 kubelet[2578]: E0417 23:38:56.745007 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41734535-6436-46fc-9937-84a76aab1f06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" podUID="41734535-6436-46fc-9937-84a76aab1f06" Apr 17 23:38:56.755336 containerd[1457]: time="2026-04-17T23:38:56.755244012Z" level=error msg="StopPodSandbox for \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" failed" error="failed to destroy network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.755652 kubelet[2578]: E0417 23:38:56.755598 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:38:56.755857 kubelet[2578]: E0417 23:38:56.755826 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248"} Apr 17 23:38:56.756093 
kubelet[2578]: E0417 23:38:56.755981 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91de5af7-c91f-4e46-b3e9-42f53f3c3734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.756093 kubelet[2578]: E0417 23:38:56.756032 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91de5af7-c91f-4e46-b3e9-42f53f3c3734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5664b8d97f-trlct" podUID="91de5af7-c91f-4e46-b3e9-42f53f3c3734" Apr 17 23:38:56.778711 systemd[1]: Started cri-containerd-01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370.scope - libcontainer container 01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370. 
Apr 17 23:38:56.780641 kubelet[2578]: E0417 23:38:56.779281 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:38:56.780641 kubelet[2578]: E0417 23:38:56.779336 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7"} Apr 17 23:38:56.780641 kubelet[2578]: E0417 23:38:56.779651 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"653de30e-add6-4842-ab39-9c2b0c910fb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.780641 kubelet[2578]: E0417 23:38:56.779776 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"653de30e-add6-4842-ab39-9c2b0c910fb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ccff6d658-s9fnj" podUID="653de30e-add6-4842-ab39-9c2b0c910fb8" Apr 17 23:38:56.780970 
containerd[1457]: time="2026-04-17T23:38:56.779040058Z" level=error msg="StopPodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" failed" error="failed to destroy network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.815252 containerd[1457]: time="2026-04-17T23:38:56.815175326Z" level=error msg="StopPodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" failed" error="failed to destroy network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.815753 containerd[1457]: time="2026-04-17T23:38:56.815710224Z" level=error msg="StopPodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" failed" error="failed to destroy network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:56.816510 kubelet[2578]: E0417 23:38:56.816194 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:38:56.816510 
kubelet[2578]: E0417 23:38:56.816268 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560"} Apr 17 23:38:56.816510 kubelet[2578]: E0417 23:38:56.816318 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a296da7-7b92-4245-a4f5-9775c1f8a482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.816510 kubelet[2578]: E0417 23:38:56.816365 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a296da7-7b92-4245-a4f5-9775c1f8a482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" podUID="7a296da7-7b92-4245-a4f5-9775c1f8a482" Apr 17 23:38:56.817700 kubelet[2578]: E0417 23:38:56.816931 2578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:38:56.817700 kubelet[2578]: E0417 
23:38:56.816989 2578 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878"} Apr 17 23:38:56.817700 kubelet[2578]: E0417 23:38:56.817031 2578 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:56.817700 kubelet[2578]: E0417 23:38:56.817068 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-c6pfx" podUID="6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f" Apr 17 23:38:56.836336 containerd[1457]: time="2026-04-17T23:38:56.836269892Z" level=info msg="StartContainer for \"01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370\" returns successfully" Apr 17 23:38:56.926987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7-shm.mount: Deactivated successfully. Apr 17 23:38:56.927557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560-shm.mount: Deactivated successfully. 
Apr 17 23:38:56.927805 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248-shm.mount: Deactivated successfully. Apr 17 23:38:56.927879 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552-shm.mount: Deactivated successfully. Apr 17 23:38:57.598325 containerd[1457]: time="2026-04-17T23:38:57.598277519Z" level=info msg="StopPodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\"" Apr 17 23:38:57.653222 kubelet[2578]: I0417 23:38:57.652353 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-6fk2w" podStartSLOduration=2.598989411 podStartE2EDuration="22.652329656s" podCreationTimestamp="2026-04-17 23:38:35 +0000 UTC" firstStartedPulling="2026-04-17 23:38:36.414520536 +0000 UTC m=+24.386520377" lastFinishedPulling="2026-04-17 23:38:56.467860756 +0000 UTC m=+44.439860622" observedRunningTime="2026-04-17 23:38:57.649704834 +0000 UTC m=+45.621704700" watchObservedRunningTime="2026-04-17 23:38:57.652329656 +0000 UTC m=+45.624329527" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.680 [INFO][3894] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.680 [INFO][3894] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" iface="eth0" netns="/var/run/netns/cni-dd523f8b-b93b-65f8-4598-2be28e3239fd" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.683 [INFO][3894] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" iface="eth0" netns="/var/run/netns/cni-dd523f8b-b93b-65f8-4598-2be28e3239fd" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.684 [INFO][3894] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" iface="eth0" netns="/var/run/netns/cni-dd523f8b-b93b-65f8-4598-2be28e3239fd" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.684 [INFO][3894] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.684 [INFO][3894] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.716 [INFO][3902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.716 [INFO][3902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.716 [INFO][3902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.726 [WARNING][3902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.726 [INFO][3902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.728 [INFO][3902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:57.734266 containerd[1457]: 2026-04-17 23:38:57.732 [INFO][3894] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:38:57.736149 containerd[1457]: time="2026-04-17T23:38:57.735354980Z" level=info msg="TearDown network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" successfully" Apr 17 23:38:57.736149 containerd[1457]: time="2026-04-17T23:38:57.735400273Z" level=info msg="StopPodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" returns successfully" Apr 17 23:38:57.739538 systemd[1]: run-netns-cni\x2ddd523f8b\x2db93b\x2d65f8\x2d4598\x2d2be28e3239fd.mount: Deactivated successfully. 
Apr 17 23:38:57.772945 kubelet[2578]: I0417 23:38:57.772875 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-nginx-config\" (UniqueName: \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-nginx-config\") pod \"653de30e-add6-4842-ab39-9c2b0c910fb8\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " Apr 17 23:38:57.772945 kubelet[2578]: I0417 23:38:57.772945 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-backend-key-pair\") pod \"653de30e-add6-4842-ab39-9c2b0c910fb8\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " Apr 17 23:38:57.773228 kubelet[2578]: I0417 23:38:57.773006 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-ca-bundle\") pod \"653de30e-add6-4842-ab39-9c2b0c910fb8\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " Apr 17 23:38:57.773228 kubelet[2578]: I0417 23:38:57.773058 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/653de30e-add6-4842-ab39-9c2b0c910fb8-kube-api-access-ddngr\" (UniqueName: \"kubernetes.io/projected/653de30e-add6-4842-ab39-9c2b0c910fb8-kube-api-access-ddngr\") pod \"653de30e-add6-4842-ab39-9c2b0c910fb8\" (UID: \"653de30e-add6-4842-ab39-9c2b0c910fb8\") " Apr 17 23:38:57.780482 kubelet[2578]: I0417 23:38:57.778185 2578 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-backend-key-pair" pod "653de30e-add6-4842-ab39-9c2b0c910fb8" (UID: "653de30e-add6-4842-ab39-9c2b0c910fb8"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:38:57.780482 kubelet[2578]: I0417 23:38:57.778721 2578 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-nginx-config" pod "653de30e-add6-4842-ab39-9c2b0c910fb8" (UID: "653de30e-add6-4842-ab39-9c2b0c910fb8"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:38:57.780482 kubelet[2578]: I0417 23:38:57.779213 2578 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-ca-bundle" pod "653de30e-add6-4842-ab39-9c2b0c910fb8" (UID: "653de30e-add6-4842-ab39-9c2b0c910fb8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:38:57.780912 kubelet[2578]: I0417 23:38:57.780856 2578 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653de30e-add6-4842-ab39-9c2b0c910fb8-kube-api-access-ddngr" pod "653de30e-add6-4842-ab39-9c2b0c910fb8" (UID: "653de30e-add6-4842-ab39-9c2b0c910fb8"). InnerVolumeSpecName "kube-api-access-ddngr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:38:57.783245 systemd[1]: var-lib-kubelet-pods-653de30e\x2dadd6\x2d4842\x2dab39\x2d9c2b0c910fb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dddngr.mount: Deactivated successfully. Apr 17 23:38:57.783413 systemd[1]: var-lib-kubelet-pods-653de30e\x2dadd6\x2d4842\x2dab39\x2d9c2b0c910fb8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:38:57.874097 kubelet[2578]: I0417 23:38:57.873925 2578 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-nginx-config\") on node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" DevicePath \"\"" Apr 17 23:38:57.874097 kubelet[2578]: I0417 23:38:57.873982 2578 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" DevicePath \"\"" Apr 17 23:38:57.874097 kubelet[2578]: I0417 23:38:57.874001 2578 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/653de30e-add6-4842-ab39-9c2b0c910fb8-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" DevicePath \"\"" Apr 17 23:38:57.874097 kubelet[2578]: I0417 23:38:57.874015 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddngr\" (UniqueName: \"kubernetes.io/projected/653de30e-add6-4842-ab39-9c2b0c910fb8-kube-api-access-ddngr\") on node \"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1\" DevicePath \"\"" Apr 17 23:38:58.182268 systemd[1]: run-containerd-runc-k8s.io-01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370-runc.bcNjik.mount: Deactivated successfully. Apr 17 23:38:58.246319 systemd[1]: Removed slice kubepods-besteffort-pod653de30e_add6_4842_ab39_9c2b0c910fb8.slice - libcontainer container kubepods-besteffort-pod653de30e_add6_4842_ab39_9c2b0c910fb8.slice. Apr 17 23:38:58.710636 systemd[1]: Created slice kubepods-besteffort-podf860fc2e_3d19_4e9f_b0fc_b207c48cc2e0.slice - libcontainer container kubepods-besteffort-podf860fc2e_3d19_4e9f_b0fc_b207c48cc2e0.slice. 
Apr 17 23:38:58.786889 kubelet[2578]: I0417 23:38:58.786838 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0-whisker-ca-bundle\") pod \"whisker-584c6744c-5ss52\" (UID: \"f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0\") " pod="calico-system/whisker-584c6744c-5ss52" Apr 17 23:38:58.790961 kubelet[2578]: I0417 23:38:58.790692 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htv5\" (UniqueName: \"kubernetes.io/projected/f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0-kube-api-access-6htv5\") pod \"whisker-584c6744c-5ss52\" (UID: \"f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0\") " pod="calico-system/whisker-584c6744c-5ss52" Apr 17 23:38:58.790961 kubelet[2578]: I0417 23:38:58.790767 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0-whisker-backend-key-pair\") pod \"whisker-584c6744c-5ss52\" (UID: \"f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0\") " pod="calico-system/whisker-584c6744c-5ss52" Apr 17 23:38:58.790961 kubelet[2578]: I0417 23:38:58.790799 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0-nginx-config\") pod \"whisker-584c6744c-5ss52\" (UID: \"f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0\") " pod="calico-system/whisker-584c6744c-5ss52" Apr 17 23:38:58.799403 systemd[1]: run-containerd-runc-k8s.io-01d940d31587c96fce04f17239674f2666d5877a8a51e75c1bc99aeff5446370-runc.VJRol6.mount: Deactivated successfully. 
Apr 17 23:38:59.024097 containerd[1457]: time="2026-04-17T23:38:59.023091009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584c6744c-5ss52,Uid:f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:59.301486 kernel: calico-node[4019]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:38:59.304033 systemd-networkd[1365]: cali478ebb1ff91: Link UP Apr 17 23:38:59.304372 systemd-networkd[1365]: cali478ebb1ff91: Gained carrier Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.115 [INFO][4076] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0 whisker-584c6744c- calico-system f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0 958 0 2026-04-17 23:38:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:584c6744c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 whisker-584c6744c-5ss52 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali478ebb1ff91 [] [] }} ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.115 [INFO][4076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.199 [INFO][4090] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" HandleID="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.220 [INFO][4090] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" HandleID="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"whisker-584c6744c-5ss52", "timestamp":"2026-04-17 23:38:59.199130109 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.220 [INFO][4090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.220 [INFO][4090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.220 [INFO][4090] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.223 [INFO][4090] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.232 [INFO][4090] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.239 [INFO][4090] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.242 [INFO][4090] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.245 [INFO][4090] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.245 [INFO][4090] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.248 [INFO][4090] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414 Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.256 [INFO][4090] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 
handle="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.264 [INFO][4090] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.1/26] block=192.168.95.0/26 handle="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.265 [INFO][4090] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.1/26] handle="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.265 [INFO][4090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:59.338053 containerd[1457]: 2026-04-17 23:38:59.265 [INFO][4090] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.1/26] IPv6=[] ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" HandleID="k8s-pod-network.3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.341434 containerd[1457]: 2026-04-17 23:38:59.269 [INFO][4076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0", GenerateName:"whisker-584c6744c-", 
Namespace:"calico-system", SelfLink:"", UID:"f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"584c6744c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"whisker-584c6744c-5ss52", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali478ebb1ff91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:59.341434 containerd[1457]: 2026-04-17 23:38:59.269 [INFO][4076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.1/32] ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.341434 containerd[1457]: 2026-04-17 23:38:59.269 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali478ebb1ff91 ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.341434 containerd[1457]: 2026-04-17 23:38:59.297 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.341434 containerd[1457]: 2026-04-17 23:38:59.302 [INFO][4076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0", GenerateName:"whisker-584c6744c-", Namespace:"calico-system", SelfLink:"", UID:"f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"584c6744c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", 
ContainerID:"3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414", Pod:"whisker-584c6744c-5ss52", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali478ebb1ff91", MAC:"c2:55:7d:3e:38:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:59.341434 containerd[1457]: 2026-04-17 23:38:59.332 [INFO][4076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414" Namespace="calico-system" Pod="whisker-584c6744c-5ss52" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--584c6744c--5ss52-eth0" Apr 17 23:38:59.384156 containerd[1457]: time="2026-04-17T23:38:59.383673199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:59.386382 containerd[1457]: time="2026-04-17T23:38:59.386308135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:59.386591 containerd[1457]: time="2026-04-17T23:38:59.386397794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:59.386791 containerd[1457]: time="2026-04-17T23:38:59.386666224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:59.431728 systemd[1]: Started cri-containerd-3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414.scope - libcontainer container 3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414. 
Apr 17 23:38:59.507137 containerd[1457]: time="2026-04-17T23:38:59.507086760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-584c6744c-5ss52,Uid:f860fc2e-3d19-4e9f-b0fc-b207c48cc2e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414\"" Apr 17 23:38:59.509849 containerd[1457]: time="2026-04-17T23:38:59.509806791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:39:00.050888 systemd-networkd[1365]: vxlan.calico: Link UP Apr 17 23:39:00.050903 systemd-networkd[1365]: vxlan.calico: Gained carrier Apr 17 23:39:00.236677 kubelet[2578]: I0417 23:39:00.236626 2578 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="653de30e-add6-4842-ab39-9c2b0c910fb8" path="/var/lib/kubelet/pods/653de30e-add6-4842-ab39-9c2b0c910fb8/volumes" Apr 17 23:39:00.629148 systemd-networkd[1365]: cali478ebb1ff91: Gained IPv6LL Apr 17 23:39:00.724175 containerd[1457]: time="2026-04-17T23:39:00.724105894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:00.726039 containerd[1457]: time="2026-04-17T23:39:00.725785588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:39:00.727516 containerd[1457]: time="2026-04-17T23:39:00.727398685Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:00.731067 containerd[1457]: time="2026-04-17T23:39:00.730997287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:00.732231 containerd[1457]: time="2026-04-17T23:39:00.732186297Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.222330561s" Apr 17 23:39:00.732339 containerd[1457]: time="2026-04-17T23:39:00.732236901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:39:00.739295 containerd[1457]: time="2026-04-17T23:39:00.739232861Z" level=info msg="CreateContainer within sandbox \"3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:39:00.764676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316814096.mount: Deactivated successfully. Apr 17 23:39:00.766681 containerd[1457]: time="2026-04-17T23:39:00.766624123Z" level=info msg="CreateContainer within sandbox \"3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8310780c75bf34f39ee29273f6d406b441e490e524b808a23d71cb8545770cc2\"" Apr 17 23:39:00.767722 containerd[1457]: time="2026-04-17T23:39:00.767675300Z" level=info msg="StartContainer for \"8310780c75bf34f39ee29273f6d406b441e490e524b808a23d71cb8545770cc2\"" Apr 17 23:39:00.826733 systemd[1]: Started cri-containerd-8310780c75bf34f39ee29273f6d406b441e490e524b808a23d71cb8545770cc2.scope - libcontainer container 8310780c75bf34f39ee29273f6d406b441e490e524b808a23d71cb8545770cc2. 
Apr 17 23:39:00.904951 containerd[1457]: time="2026-04-17T23:39:00.904790091Z" level=info msg="StartContainer for \"8310780c75bf34f39ee29273f6d406b441e490e524b808a23d71cb8545770cc2\" returns successfully" Apr 17 23:39:00.907139 containerd[1457]: time="2026-04-17T23:39:00.907096116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:39:01.268724 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Apr 17 23:39:02.288471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267121931.mount: Deactivated successfully. Apr 17 23:39:02.315467 containerd[1457]: time="2026-04-17T23:39:02.315375977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:02.316973 containerd[1457]: time="2026-04-17T23:39:02.316768409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:39:02.318628 containerd[1457]: time="2026-04-17T23:39:02.318540497Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:02.322709 containerd[1457]: time="2026-04-17T23:39:02.322574506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:02.324145 containerd[1457]: time="2026-04-17T23:39:02.323911235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" 
in 1.416753578s" Apr 17 23:39:02.324145 containerd[1457]: time="2026-04-17T23:39:02.323964980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:39:02.330565 containerd[1457]: time="2026-04-17T23:39:02.330306238Z" level=info msg="CreateContainer within sandbox \"3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:39:02.353871 containerd[1457]: time="2026-04-17T23:39:02.353823192Z" level=info msg="CreateContainer within sandbox \"3ae57ab790779c6711ac01950186e5a5aed14342ad40233174992183ca709414\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"022fccfff16e1dbc8926ac6fe81a511bc936383d653f57ac8782fa51555ff23c\"" Apr 17 23:39:02.354725 containerd[1457]: time="2026-04-17T23:39:02.354684145Z" level=info msg="StartContainer for \"022fccfff16e1dbc8926ac6fe81a511bc936383d653f57ac8782fa51555ff23c\"" Apr 17 23:39:02.402684 systemd[1]: Started cri-containerd-022fccfff16e1dbc8926ac6fe81a511bc936383d653f57ac8782fa51555ff23c.scope - libcontainer container 022fccfff16e1dbc8926ac6fe81a511bc936383d653f57ac8782fa51555ff23c. 
Apr 17 23:39:02.463710 containerd[1457]: time="2026-04-17T23:39:02.463223589Z" level=info msg="StartContainer for \"022fccfff16e1dbc8926ac6fe81a511bc936383d653f57ac8782fa51555ff23c\" returns successfully" Apr 17 23:39:03.452881 ntpd[1421]: Listen normally on 7 vxlan.calico 192.168.95.0:123 Apr 17 23:39:03.453536 ntpd[1421]: 17 Apr 23:39:03 ntpd[1421]: Listen normally on 7 vxlan.calico 192.168.95.0:123 Apr 17 23:39:03.453536 ntpd[1421]: 17 Apr 23:39:03 ntpd[1421]: Listen normally on 8 cali478ebb1ff91 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:39:03.453536 ntpd[1421]: 17 Apr 23:39:03 ntpd[1421]: Listen normally on 9 vxlan.calico [fe80::64c8:e8ff:fe42:97ba%5]:123 Apr 17 23:39:03.453025 ntpd[1421]: Listen normally on 8 cali478ebb1ff91 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:39:03.453118 ntpd[1421]: Listen normally on 9 vxlan.calico [fe80::64c8:e8ff:fe42:97ba%5]:123 Apr 17 23:39:08.235202 containerd[1457]: time="2026-04-17T23:39:08.234307789Z" level=info msg="StopPodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\"" Apr 17 23:39:08.306995 kubelet[2578]: I0417 23:39:08.305251 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-584c6744c-5ss52" podStartSLOduration=7.489184835 podStartE2EDuration="10.305225664s" podCreationTimestamp="2026-04-17 23:38:58 +0000 UTC" firstStartedPulling="2026-04-17 23:38:59.50936633 +0000 UTC m=+47.481366173" lastFinishedPulling="2026-04-17 23:39:02.325407142 +0000 UTC m=+50.297407002" observedRunningTime="2026-04-17 23:39:02.641736997 +0000 UTC m=+50.613736867" watchObservedRunningTime="2026-04-17 23:39:08.305225664 +0000 UTC m=+56.277225528" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.303 [INFO][4376] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.303 [INFO][4376] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" iface="eth0" netns="/var/run/netns/cni-74edce7c-a061-fbc5-0e7c-e05cc53e858e" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.304 [INFO][4376] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" iface="eth0" netns="/var/run/netns/cni-74edce7c-a061-fbc5-0e7c-e05cc53e858e" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.306 [INFO][4376] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" iface="eth0" netns="/var/run/netns/cni-74edce7c-a061-fbc5-0e7c-e05cc53e858e" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.306 [INFO][4376] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.306 [INFO][4376] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.338 [INFO][4384] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.338 [INFO][4384] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.338 [INFO][4384] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.348 [WARNING][4384] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.348 [INFO][4384] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.350 [INFO][4384] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:08.354373 containerd[1457]: 2026-04-17 23:39:08.352 [INFO][4376] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:08.357944 containerd[1457]: time="2026-04-17T23:39:08.354680579Z" level=info msg="TearDown network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" successfully" Apr 17 23:39:08.357944 containerd[1457]: time="2026-04-17T23:39:08.354745929Z" level=info msg="StopPodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" returns successfully" Apr 17 23:39:08.361910 containerd[1457]: time="2026-04-17T23:39:08.360734719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d986d88-pzk8v,Uid:41734535-6436-46fc-9937-84a76aab1f06,Namespace:calico-system,Attempt:1,}" Apr 17 23:39:08.361744 systemd[1]: run-netns-cni\x2d74edce7c\x2da061\x2dfbc5\x2d0e7c\x2de05cc53e858e.mount: Deactivated successfully. Apr 17 23:39:08.518836 systemd-networkd[1365]: cali2b4e128b214: Link UP Apr 17 23:39:08.521667 systemd-networkd[1365]: cali2b4e128b214: Gained carrier Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.429 [INFO][4392] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0 calico-kube-controllers-5f7d986d88- calico-system 41734535-6436-46fc-9937-84a76aab1f06 1002 0 2026-04-17 23:38:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f7d986d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 calico-kube-controllers-5f7d986d88-pzk8v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b4e128b214 [] [] }} 
ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.429 [INFO][4392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.464 [INFO][4403] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" HandleID="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.475 [INFO][4403] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" HandleID="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"calico-kube-controllers-5f7d986d88-pzk8v", "timestamp":"2026-04-17 23:39:08.464570402 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00052cf20)} Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.475 [INFO][4403] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.475 [INFO][4403] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.475 [INFO][4403] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.478 [INFO][4403] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.486 [INFO][4403] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.491 [INFO][4403] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.494 [INFO][4403] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.497 [INFO][4403] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.497 [INFO][4403] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 
handle="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.499 [INFO][4403] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678 Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.503 [INFO][4403] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.512 [INFO][4403] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.2/26] block=192.168.95.0/26 handle="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.512 [INFO][4403] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.2/26] handle="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.512 [INFO][4403] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:08.550867 containerd[1457]: 2026-04-17 23:39:08.512 [INFO][4403] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.2/26] IPv6=[] ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" HandleID="k8s-pod-network.a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.556085 containerd[1457]: 2026-04-17 23:39:08.514 [INFO][4392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0", GenerateName:"calico-kube-controllers-5f7d986d88-", Namespace:"calico-system", SelfLink:"", UID:"41734535-6436-46fc-9937-84a76aab1f06", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7d986d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"calico-kube-controllers-5f7d986d88-pzk8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b4e128b214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:08.556085 containerd[1457]: 2026-04-17 23:39:08.515 [INFO][4392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.2/32] ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.556085 containerd[1457]: 2026-04-17 23:39:08.515 [INFO][4392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b4e128b214 ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.556085 containerd[1457]: 2026-04-17 23:39:08.523 [INFO][4392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.556085 containerd[1457]: 2026-04-17 23:39:08.523 [INFO][4392] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0", GenerateName:"calico-kube-controllers-5f7d986d88-", Namespace:"calico-system", SelfLink:"", UID:"41734535-6436-46fc-9937-84a76aab1f06", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7d986d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678", Pod:"calico-kube-controllers-5f7d986d88-pzk8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b4e128b214", MAC:"2a:b2:c6:75:12:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 
23:39:08.556085 containerd[1457]: 2026-04-17 23:39:08.540 [INFO][4392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678" Namespace="calico-system" Pod="calico-kube-controllers-5f7d986d88-pzk8v" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:08.604370 containerd[1457]: time="2026-04-17T23:39:08.603774925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:08.604370 containerd[1457]: time="2026-04-17T23:39:08.603877097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:08.604663 containerd[1457]: time="2026-04-17T23:39:08.604170696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:08.604663 containerd[1457]: time="2026-04-17T23:39:08.604342800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:08.667038 systemd[1]: Started cri-containerd-a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678.scope - libcontainer container a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678. 
Apr 17 23:39:08.745345 containerd[1457]: time="2026-04-17T23:39:08.745288613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7d986d88-pzk8v,Uid:41734535-6436-46fc-9937-84a76aab1f06,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678\"" Apr 17 23:39:08.748548 containerd[1457]: time="2026-04-17T23:39:08.748469717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:39:09.234058 containerd[1457]: time="2026-04-17T23:39:09.233517332Z" level=info msg="StopPodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\"" Apr 17 23:39:09.235492 containerd[1457]: time="2026-04-17T23:39:09.234719247Z" level=info msg="StopPodSandbox for \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\"" Apr 17 23:39:09.363233 systemd[1]: run-containerd-runc-k8s.io-a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678-runc.HbzZ1R.mount: Deactivated successfully. Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.337 [INFO][4494] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.341 [INFO][4494] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" iface="eth0" netns="/var/run/netns/cni-e88f3234-802b-d1eb-1c5e-650bc42b217f" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.343 [INFO][4494] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" iface="eth0" netns="/var/run/netns/cni-e88f3234-802b-d1eb-1c5e-650bc42b217f" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.344 [INFO][4494] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" iface="eth0" netns="/var/run/netns/cni-e88f3234-802b-d1eb-1c5e-650bc42b217f" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.344 [INFO][4494] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.344 [INFO][4494] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.389 [INFO][4510] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.390 [INFO][4510] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.390 [INFO][4510] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.400 [WARNING][4510] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.400 [INFO][4510] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.402 [INFO][4510] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:09.408542 containerd[1457]: 2026-04-17 23:39:09.404 [INFO][4494] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:09.413100 containerd[1457]: time="2026-04-17T23:39:09.411434157Z" level=info msg="TearDown network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" successfully" Apr 17 23:39:09.413100 containerd[1457]: time="2026-04-17T23:39:09.411512308Z" level=info msg="StopPodSandbox for \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" returns successfully" Apr 17 23:39:09.414506 systemd[1]: run-netns-cni\x2de88f3234\x2d802b\x2dd1eb\x2d1c5e\x2d650bc42b217f.mount: Deactivated successfully. 
Apr 17 23:39:09.416984 containerd[1457]: time="2026-04-17T23:39:09.416945117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-trlct,Uid:91de5af7-c91f-4e46-b3e9-42f53f3c3734,Namespace:calico-system,Attempt:1,}" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.339 [INFO][4495] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.339 [INFO][4495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" iface="eth0" netns="/var/run/netns/cni-49c22e46-0e1d-0b2f-9fbd-a50211788882" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.340 [INFO][4495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" iface="eth0" netns="/var/run/netns/cni-49c22e46-0e1d-0b2f-9fbd-a50211788882" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.341 [INFO][4495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" iface="eth0" netns="/var/run/netns/cni-49c22e46-0e1d-0b2f-9fbd-a50211788882" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.341 [INFO][4495] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.341 [INFO][4495] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.394 [INFO][4508] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.394 [INFO][4508] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.402 [INFO][4508] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.421 [WARNING][4508] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.421 [INFO][4508] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.423 [INFO][4508] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:09.428488 containerd[1457]: 2026-04-17 23:39:09.426 [INFO][4495] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:09.430245 containerd[1457]: time="2026-04-17T23:39:09.428638603Z" level=info msg="TearDown network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" successfully" Apr 17 23:39:09.430245 containerd[1457]: time="2026-04-17T23:39:09.428670054Z" level=info msg="StopPodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" returns successfully" Apr 17 23:39:09.436503 containerd[1457]: time="2026-04-17T23:39:09.434679297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq7p4,Uid:52277689-f4f8-4eb4-acdf-589f30ebdb48,Namespace:calico-system,Attempt:1,}" Apr 17 23:39:09.436893 systemd[1]: run-netns-cni\x2d49c22e46\x2d0e1d\x2d0b2f\x2d9fbd\x2da50211788882.mount: Deactivated successfully. 
Apr 17 23:39:09.590562 systemd-networkd[1365]: cali2b4e128b214: Gained IPv6LL Apr 17 23:39:09.749230 systemd-networkd[1365]: cali9554e5c73d5: Link UP Apr 17 23:39:09.750982 systemd-networkd[1365]: cali9554e5c73d5: Gained carrier Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.542 [INFO][4523] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0 calico-apiserver-5664b8d97f- calico-system 91de5af7-c91f-4e46-b3e9-42f53f3c3734 1011 0 2026-04-17 23:38:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5664b8d97f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 calico-apiserver-5664b8d97f-trlct eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9554e5c73d5 [] [] }} ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.542 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.648 [INFO][4552] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" 
HandleID="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.676 [INFO][4552] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" HandleID="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdde0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"calico-apiserver-5664b8d97f-trlct", "timestamp":"2026-04-17 23:39:09.648591522 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003774a0)} Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.676 [INFO][4552] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.676 [INFO][4552] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.676 [INFO][4552] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.683 [INFO][4552] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.692 [INFO][4552] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.703 [INFO][4552] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.707 [INFO][4552] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.711 [INFO][4552] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.712 [INFO][4552] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.715 [INFO][4552] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.722 [INFO][4552] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 
handle="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.736 [INFO][4552] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.3/26] block=192.168.95.0/26 handle="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.736 [INFO][4552] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.3/26] handle="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.736 [INFO][4552] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:09.782117 containerd[1457]: 2026-04-17 23:39:09.736 [INFO][4552] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.3/26] IPv6=[] ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" HandleID="k8s-pod-network.b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.785088 containerd[1457]: 2026-04-17 23:39:09.742 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"91de5af7-c91f-4e46-b3e9-42f53f3c3734", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"calico-apiserver-5664b8d97f-trlct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9554e5c73d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:09.785088 containerd[1457]: 2026-04-17 23:39:09.742 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.3/32] ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.785088 containerd[1457]: 2026-04-17 23:39:09.742 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to cali9554e5c73d5 ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.785088 containerd[1457]: 2026-04-17 23:39:09.749 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.785088 containerd[1457]: 2026-04-17 23:39:09.750 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"91de5af7-c91f-4e46-b3e9-42f53f3c3734", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e", Pod:"calico-apiserver-5664b8d97f-trlct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9554e5c73d5", MAC:"06:6a:55:a8:4f:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:09.785088 containerd[1457]: 2026-04-17 23:39:09.767 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-trlct" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:09.873620 systemd-networkd[1365]: cali8ffac2f3f2f: Link UP Apr 17 23:39:09.879899 systemd-networkd[1365]: cali8ffac2f3f2f: Gained carrier Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.529 [INFO][4533] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0 csi-node-driver- calico-system 52277689-f4f8-4eb4-acdf-589f30ebdb48 1012 0 2026-04-17 23:38:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 csi-node-driver-zq7p4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ffac2f3f2f [] [] }} ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.529 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.663 [INFO][4547] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" HandleID="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.686 [INFO][4547] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" HandleID="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"csi-node-driver-zq7p4", "timestamp":"2026-04-17 
23:39:09.663888605 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000306420)} Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.686 [INFO][4547] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.736 [INFO][4547] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.736 [INFO][4547] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.785 [INFO][4547] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.799 [INFO][4547] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.812 [INFO][4547] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.816 [INFO][4547] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.821 [INFO][4547] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.821 [INFO][4547] 
ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.825 [INFO][4547] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3 Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.834 [INFO][4547] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.846 [INFO][4547] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.4/26] block=192.168.95.0/26 handle="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.846 [INFO][4547] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.4/26] handle="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.846 [INFO][4547] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:09.921192 containerd[1457]: 2026-04-17 23:39:09.846 [INFO][4547] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.4/26] IPv6=[] ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" HandleID="k8s-pod-network.959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.922757 containerd[1457]: 2026-04-17 23:39:09.856 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52277689-f4f8-4eb4-acdf-589f30ebdb48", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", 
Pod:"csi-node-driver-zq7p4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ffac2f3f2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:09.922757 containerd[1457]: 2026-04-17 23:39:09.857 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.4/32] ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.922757 containerd[1457]: 2026-04-17 23:39:09.857 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ffac2f3f2f ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.922757 containerd[1457]: 2026-04-17 23:39:09.880 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.922757 containerd[1457]: 2026-04-17 23:39:09.881 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" Namespace="calico-system" Pod="csi-node-driver-zq7p4" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52277689-f4f8-4eb4-acdf-589f30ebdb48", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3", Pod:"csi-node-driver-zq7p4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ffac2f3f2f", MAC:"a2:5f:91:5e:3b:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:09.922757 containerd[1457]: 2026-04-17 23:39:09.907 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3" 
Namespace="calico-system" Pod="csi-node-driver-zq7p4" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:09.933213 containerd[1457]: time="2026-04-17T23:39:09.922359841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:09.933213 containerd[1457]: time="2026-04-17T23:39:09.922499305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:09.933213 containerd[1457]: time="2026-04-17T23:39:09.922524203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:09.933213 containerd[1457]: time="2026-04-17T23:39:09.922677858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:09.980725 systemd[1]: Started cri-containerd-b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e.scope - libcontainer container b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e. Apr 17 23:39:10.014237 containerd[1457]: time="2026-04-17T23:39:10.010790053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:10.014598 containerd[1457]: time="2026-04-17T23:39:10.014519549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:10.014777 containerd[1457]: time="2026-04-17T23:39:10.014745255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:10.015209 containerd[1457]: time="2026-04-17T23:39:10.015154630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:10.065696 systemd[1]: Started cri-containerd-959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3.scope - libcontainer container 959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3. Apr 17 23:39:10.172684 containerd[1457]: time="2026-04-17T23:39:10.172623376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zq7p4,Uid:52277689-f4f8-4eb4-acdf-589f30ebdb48,Namespace:calico-system,Attempt:1,} returns sandbox id \"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3\"" Apr 17 23:39:10.202865 containerd[1457]: time="2026-04-17T23:39:10.202817360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-trlct,Uid:91de5af7-c91f-4e46-b3e9-42f53f3c3734,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e\"" Apr 17 23:39:10.237427 containerd[1457]: time="2026-04-17T23:39:10.237381297Z" level=info msg="StopPodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\"" Apr 17 23:39:10.253771 containerd[1457]: time="2026-04-17T23:39:10.253611835Z" level=info msg="StopPodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\"" Apr 17 23:39:10.262610 containerd[1457]: time="2026-04-17T23:39:10.262565708Z" level=info msg="StopPodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\"" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.445 [INFO][4710] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.450 [INFO][4710] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" iface="eth0" netns="/var/run/netns/cni-8d5bb6ef-00f9-08e7-dacc-97b0fdaed552" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.451 [INFO][4710] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" iface="eth0" netns="/var/run/netns/cni-8d5bb6ef-00f9-08e7-dacc-97b0fdaed552" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.452 [INFO][4710] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" iface="eth0" netns="/var/run/netns/cni-8d5bb6ef-00f9-08e7-dacc-97b0fdaed552" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.452 [INFO][4710] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.452 [INFO][4710] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.519 [INFO][4725] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.519 [INFO][4725] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.519 [INFO][4725] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.535 [WARNING][4725] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.535 [INFO][4725] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.538 [INFO][4725] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:10.549358 containerd[1457]: 2026-04-17 23:39:10.542 [INFO][4710] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:10.561876 systemd[1]: run-netns-cni\x2d8d5bb6ef\x2d00f9\x2d08e7\x2ddacc\x2d97b0fdaed552.mount: Deactivated successfully. 
Apr 17 23:39:10.563352 containerd[1457]: time="2026-04-17T23:39:10.563102604Z" level=info msg="TearDown network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" successfully" Apr 17 23:39:10.563352 containerd[1457]: time="2026-04-17T23:39:10.563148653Z" level=info msg="StopPodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" returns successfully" Apr 17 23:39:10.571069 containerd[1457]: time="2026-04-17T23:39:10.571027383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-bbdfw,Uid:7a296da7-7b92-4245-a4f5-9775c1f8a482,Namespace:calico-system,Attempt:1,}" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.447 [INFO][4703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.451 [INFO][4703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" iface="eth0" netns="/var/run/netns/cni-11235b57-719b-bee9-c281-cc80e3bd66a1" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.451 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" iface="eth0" netns="/var/run/netns/cni-11235b57-719b-bee9-c281-cc80e3bd66a1" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.452 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" iface="eth0" netns="/var/run/netns/cni-11235b57-719b-bee9-c281-cc80e3bd66a1" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.452 [INFO][4703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.452 [INFO][4703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.553 [INFO][4727] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.554 [INFO][4727] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.554 [INFO][4727] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.575 [WARNING][4727] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.575 [INFO][4727] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.577 [INFO][4727] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:10.584280 containerd[1457]: 2026-04-17 23:39:10.580 [INFO][4703] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:10.586812 containerd[1457]: time="2026-04-17T23:39:10.584549170Z" level=info msg="TearDown network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" successfully" Apr 17 23:39:10.586812 containerd[1457]: time="2026-04-17T23:39:10.584583826Z" level=info msg="StopPodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" returns successfully" Apr 17 23:39:10.594095 systemd[1]: run-netns-cni\x2d11235b57\x2d719b\x2dbee9\x2dc281\x2dcc80e3bd66a1.mount: Deactivated successfully. 
Apr 17 23:39:10.599928 containerd[1457]: time="2026-04-17T23:39:10.598340798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-pkztx,Uid:1c2398f0-91d9-434e-8477-385776513cc3,Namespace:calico-system,Attempt:1,}" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.482 [INFO][4709] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.483 [INFO][4709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" iface="eth0" netns="/var/run/netns/cni-fef4fad5-5873-99d6-e4da-5404e9f5baa0" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.484 [INFO][4709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" iface="eth0" netns="/var/run/netns/cni-fef4fad5-5873-99d6-e4da-5404e9f5baa0" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.485 [INFO][4709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" iface="eth0" netns="/var/run/netns/cni-fef4fad5-5873-99d6-e4da-5404e9f5baa0" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.487 [INFO][4709] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.487 [INFO][4709] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.657 [INFO][4735] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.662 [INFO][4735] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.664 [INFO][4735] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.711 [WARNING][4735] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.711 [INFO][4735] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.717 [INFO][4735] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:10.740739 containerd[1457]: 2026-04-17 23:39:10.732 [INFO][4709] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:10.742072 containerd[1457]: time="2026-04-17T23:39:10.741576026Z" level=info msg="TearDown network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" successfully" Apr 17 23:39:10.742072 containerd[1457]: time="2026-04-17T23:39:10.741986138Z" level=info msg="StopPodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" returns successfully" Apr 17 23:39:10.748846 containerd[1457]: time="2026-04-17T23:39:10.747750214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fkclr,Uid:21f2939a-2dbb-4eca-a507-d3f15555c474,Namespace:kube-system,Attempt:1,}" Apr 17 23:39:10.870347 systemd-networkd[1365]: cali9554e5c73d5: Gained IPv6LL Apr 17 23:39:11.067696 systemd-networkd[1365]: calidf101e38354: Link UP Apr 17 23:39:11.070395 systemd-networkd[1365]: calidf101e38354: Gained carrier Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.810 
[INFO][4753] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0 calico-apiserver-5664b8d97f- calico-system 7a296da7-7b92-4245-a4f5-9775c1f8a482 1025 0 2026-04-17 23:38:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5664b8d97f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 calico-apiserver-5664b8d97f-bbdfw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidf101e38354 [] [] }} ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.811 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.941 [INFO][4784] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" HandleID="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.963 [INFO][4784] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" HandleID="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"calico-apiserver-5664b8d97f-bbdfw", "timestamp":"2026-04-17 23:39:10.941065892 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192580)} Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.963 [INFO][4784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.963 [INFO][4784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.963 [INFO][4784] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.971 [INFO][4784] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:10.994 [INFO][4784] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.009 [INFO][4784] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.014 [INFO][4784] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.019 [INFO][4784] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.019 [INFO][4784] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.022 [INFO][4784] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.032 [INFO][4784] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 
handle="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.048 [INFO][4784] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.5/26] block=192.168.95.0/26 handle="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.048 [INFO][4784] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.5/26] handle="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.048 [INFO][4784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:11.122854 containerd[1457]: 2026-04-17 23:39:11.048 [INFO][4784] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.5/26] IPv6=[] ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" HandleID="k8s-pod-network.5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.125606 containerd[1457]: 2026-04-17 23:39:11.058 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"7a296da7-7b92-4245-a4f5-9775c1f8a482", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"calico-apiserver-5664b8d97f-bbdfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf101e38354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:11.125606 containerd[1457]: 2026-04-17 23:39:11.058 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.5/32] ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.125606 containerd[1457]: 2026-04-17 23:39:11.058 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to calidf101e38354 ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.125606 containerd[1457]: 2026-04-17 23:39:11.074 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.125606 containerd[1457]: 2026-04-17 23:39:11.077 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"7a296da7-7b92-4245-a4f5-9775c1f8a482", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c", Pod:"calico-apiserver-5664b8d97f-bbdfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf101e38354", MAC:"12:2e:f1:3a:ce:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:11.125606 containerd[1457]: 2026-04-17 23:39:11.112 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c" Namespace="calico-system" Pod="calico-apiserver-5664b8d97f-bbdfw" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:11.189319 systemd-networkd[1365]: cali8ffac2f3f2f: Gained IPv6LL Apr 17 23:39:11.204835 systemd-networkd[1365]: cali5f8bca6e331: Link UP Apr 17 23:39:11.205222 systemd-networkd[1365]: cali5f8bca6e331: Gained carrier Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:10.823 [INFO][4758] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0 goldmane-9f7667bb8- calico-system 1c2398f0-91d9-434e-8477-385776513cc3 1026 0 2026-04-17 23:38:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 goldmane-9f7667bb8-pkztx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5f8bca6e331 [] [] }} ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:10.823 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:10.983 [INFO][4794] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" HandleID="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.002 [INFO][4794] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" HandleID="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004feb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", 
"pod":"goldmane-9f7667bb8-pkztx", "timestamp":"2026-04-17 23:39:10.98331252 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188f20)} Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.002 [INFO][4794] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.048 [INFO][4794] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.049 [INFO][4794] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.073 [INFO][4794] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.101 [INFO][4794] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.121 [INFO][4794] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.128 [INFO][4794] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.134 [INFO][4794] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 
containerd[1457]: 2026-04-17 23:39:11.134 [INFO][4794] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.138 [INFO][4794] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791 Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.153 [INFO][4794] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 handle="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.181 [INFO][4794] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.6/26] block=192.168.95.0/26 handle="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.182 [INFO][4794] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.6/26] handle="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.182 [INFO][4794] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:39:11.248784 containerd[1457]: 2026-04-17 23:39:11.182 [INFO][4794] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.6/26] IPv6=[] ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" HandleID="k8s-pod-network.9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.252276 containerd[1457]: 2026-04-17 23:39:11.196 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1c2398f0-91d9-434e-8477-385776513cc3", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"goldmane-9f7667bb8-pkztx", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.95.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f8bca6e331", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:11.252276 containerd[1457]: 2026-04-17 23:39:11.196 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.6/32] ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.252276 containerd[1457]: 2026-04-17 23:39:11.197 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f8bca6e331 ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.252276 containerd[1457]: 2026-04-17 23:39:11.204 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.252276 containerd[1457]: 2026-04-17 23:39:11.208 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1c2398f0-91d9-434e-8477-385776513cc3", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791", Pod:"goldmane-9f7667bb8-pkztx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f8bca6e331", MAC:"d2:9d:93:2b:9e:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:11.252276 containerd[1457]: 2026-04-17 23:39:11.229 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791" Namespace="calico-system" Pod="goldmane-9f7667bb8-pkztx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:11.296715 containerd[1457]: 
time="2026-04-17T23:39:11.295622803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:11.296715 containerd[1457]: time="2026-04-17T23:39:11.295709069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:11.296715 containerd[1457]: time="2026-04-17T23:39:11.295741137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:11.296715 containerd[1457]: time="2026-04-17T23:39:11.295910251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:11.376435 systemd[1]: run-netns-cni\x2dfef4fad5\x2d5873\x2d99d6\x2de4da\x2d5404e9f5baa0.mount: Deactivated successfully. Apr 17 23:39:11.380772 systemd-networkd[1365]: calib76af7edca0: Link UP Apr 17 23:39:11.399855 systemd-networkd[1365]: calib76af7edca0: Gained carrier Apr 17 23:39:11.402350 containerd[1457]: time="2026-04-17T23:39:11.401978459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:11.402851 containerd[1457]: time="2026-04-17T23:39:11.402802408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:11.403093 containerd[1457]: time="2026-04-17T23:39:11.403053435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:11.404677 systemd[1]: Started cri-containerd-5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c.scope - libcontainer container 5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c. 
Apr 17 23:39:11.412738 containerd[1457]: time="2026-04-17T23:39:11.408357717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.006 [INFO][4785] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0 coredns-7d764666f9- kube-system 21f2939a-2dbb-4eca-a507-d3f15555c474 1028 0 2026-04-17 23:38:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 coredns-7d764666f9-fkclr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib76af7edca0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.006 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.155 [INFO][4815] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" 
HandleID="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.193 [INFO][4815] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" HandleID="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9470), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"coredns-7d764666f9-fkclr", "timestamp":"2026-04-17 23:39:11.155555569 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.193 [INFO][4815] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.193 [INFO][4815] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.193 [INFO][4815] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.219 [INFO][4815] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.246 [INFO][4815] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.258 [INFO][4815] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.263 [INFO][4815] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.271 [INFO][4815] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.272 [INFO][4815] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.278 [INFO][4815] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.289 [INFO][4815] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 
handle="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.310 [INFO][4815] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.7/26] block=192.168.95.0/26 handle="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.312 [INFO][4815] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.7/26] handle="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.313 [INFO][4815] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:11.464793 containerd[1457]: 2026-04-17 23:39:11.315 [INFO][4815] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.7/26] IPv6=[] ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" HandleID="k8s-pod-network.ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.467315 containerd[1457]: 2026-04-17 23:39:11.343 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0", GenerateName:"coredns-7d764666f9-", 
Namespace:"kube-system", SelfLink:"", UID:"21f2939a-2dbb-4eca-a507-d3f15555c474", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"coredns-7d764666f9-fkclr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib76af7edca0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:11.467315 containerd[1457]: 2026-04-17 23:39:11.344 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI 
using IPs: [192.168.95.7/32] ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.467315 containerd[1457]: 2026-04-17 23:39:11.347 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib76af7edca0 ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.467315 containerd[1457]: 2026-04-17 23:39:11.404 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.468745 containerd[1457]: 2026-04-17 23:39:11.409 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"21f2939a-2dbb-4eca-a507-d3f15555c474", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b", Pod:"coredns-7d764666f9-fkclr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib76af7edca0", MAC:"b2:47:bc:8d:9e:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:11.468745 containerd[1457]: 2026-04-17 23:39:11.458 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b" Namespace="kube-system" 
Pod="coredns-7d764666f9-fkclr" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:11.511126 systemd[1]: Started cri-containerd-9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791.scope - libcontainer container 9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791. Apr 17 23:39:11.607632 containerd[1457]: time="2026-04-17T23:39:11.606028230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:11.607632 containerd[1457]: time="2026-04-17T23:39:11.606117596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:11.607632 containerd[1457]: time="2026-04-17T23:39:11.606146651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:11.607632 containerd[1457]: time="2026-04-17T23:39:11.606375793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:11.681024 systemd[1]: Started cri-containerd-ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b.scope - libcontainer container ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b. 
Apr 17 23:39:11.708992 containerd[1457]: time="2026-04-17T23:39:11.708161532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664b8d97f-bbdfw,Uid:7a296da7-7b92-4245-a4f5-9775c1f8a482,Namespace:calico-system,Attempt:1,} returns sandbox id \"5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c\"" Apr 17 23:39:11.746905 containerd[1457]: time="2026-04-17T23:39:11.746736116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-pkztx,Uid:1c2398f0-91d9-434e-8477-385776513cc3,Namespace:calico-system,Attempt:1,} returns sandbox id \"9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791\"" Apr 17 23:39:11.809428 containerd[1457]: time="2026-04-17T23:39:11.809267348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fkclr,Uid:21f2939a-2dbb-4eca-a507-d3f15555c474,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b\"" Apr 17 23:39:11.821549 containerd[1457]: time="2026-04-17T23:39:11.820810879Z" level=info msg="CreateContainer within sandbox \"ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:39:11.844895 containerd[1457]: time="2026-04-17T23:39:11.844686938Z" level=info msg="CreateContainer within sandbox \"ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d809312ba83d6105e30edb1310cb22af03214314dc0ebd0e59abdcf4570e0bc\"" Apr 17 23:39:11.847470 containerd[1457]: time="2026-04-17T23:39:11.847299461Z" level=info msg="StartContainer for \"3d809312ba83d6105e30edb1310cb22af03214314dc0ebd0e59abdcf4570e0bc\"" Apr 17 23:39:11.927930 systemd[1]: Started cri-containerd-3d809312ba83d6105e30edb1310cb22af03214314dc0ebd0e59abdcf4570e0bc.scope - libcontainer container 3d809312ba83d6105e30edb1310cb22af03214314dc0ebd0e59abdcf4570e0bc. 
Apr 17 23:39:11.991662 containerd[1457]: time="2026-04-17T23:39:11.990941254Z" level=info msg="StartContainer for \"3d809312ba83d6105e30edb1310cb22af03214314dc0ebd0e59abdcf4570e0bc\" returns successfully" Apr 17 23:39:12.201592 containerd[1457]: time="2026-04-17T23:39:12.201028757Z" level=info msg="StopPodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\"" Apr 17 23:39:12.241109 containerd[1457]: time="2026-04-17T23:39:12.240958736Z" level=info msg="StopPodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\"" Apr 17 23:39:12.341869 systemd-networkd[1365]: calidf101e38354: Gained IPv6LL Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.329 [WARNING][5032] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1c2398f0-91d9-434e-8477-385776513cc3", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", 
ContainerID:"9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791", Pod:"goldmane-9f7667bb8-pkztx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f8bca6e331", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.329 [INFO][5032] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.329 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" iface="eth0" netns="" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.329 [INFO][5032] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.329 [INFO][5032] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.453 [INFO][5056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.454 [INFO][5056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.454 [INFO][5056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.472 [WARNING][5056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.473 [INFO][5056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.480 [INFO][5056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:12.489184 containerd[1457]: 2026-04-17 23:39:12.485 [INFO][5032] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.489184 containerd[1457]: time="2026-04-17T23:39:12.488569806Z" level=info msg="TearDown network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" successfully" Apr 17 23:39:12.489184 containerd[1457]: time="2026-04-17T23:39:12.488607109Z" level=info msg="StopPodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" returns successfully" Apr 17 23:39:12.490498 containerd[1457]: time="2026-04-17T23:39:12.489786820Z" level=info msg="RemovePodSandbox for \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\"" Apr 17 23:39:12.490498 containerd[1457]: time="2026-04-17T23:39:12.489827065Z" level=info msg="Forcibly stopping sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\"" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.448 [INFO][5048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.450 [INFO][5048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" iface="eth0" netns="/var/run/netns/cni-68ee1588-1df8-aa41-2a77-5271bcd2dd7c" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.450 [INFO][5048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" iface="eth0" netns="/var/run/netns/cni-68ee1588-1df8-aa41-2a77-5271bcd2dd7c" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.451 [INFO][5048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" iface="eth0" netns="/var/run/netns/cni-68ee1588-1df8-aa41-2a77-5271bcd2dd7c" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.451 [INFO][5048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.451 [INFO][5048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.548 [INFO][5067] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.549 [INFO][5067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.549 [INFO][5067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.574 [WARNING][5067] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.574 [INFO][5067] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.578 [INFO][5067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:12.587306 containerd[1457]: 2026-04-17 23:39:12.583 [INFO][5048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:39:12.587306 containerd[1457]: time="2026-04-17T23:39:12.586675372Z" level=info msg="TearDown network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" successfully" Apr 17 23:39:12.587306 containerd[1457]: time="2026-04-17T23:39:12.586712509Z" level=info msg="StopPodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" returns successfully" Apr 17 23:39:12.594838 systemd[1]: run-netns-cni\x2d68ee1588\x2d1df8\x2daa41\x2d2a77\x2d5271bcd2dd7c.mount: Deactivated successfully. 
Apr 17 23:39:12.601210 containerd[1457]: time="2026-04-17T23:39:12.600349007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c6pfx,Uid:6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f,Namespace:kube-system,Attempt:1,}" Apr 17 23:39:12.799656 kubelet[2578]: I0417 23:39:12.799374 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-fkclr" podStartSLOduration=53.799349524 podStartE2EDuration="53.799349524s" podCreationTimestamp="2026-04-17 23:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:12.761595929 +0000 UTC m=+60.733595818" watchObservedRunningTime="2026-04-17 23:39:12.799349524 +0000 UTC m=+60.771349393" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.677 [WARNING][5080] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1c2398f0-91d9-434e-8477-385776513cc3", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791", Pod:"goldmane-9f7667bb8-pkztx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f8bca6e331", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.678 [INFO][5080] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.679 [INFO][5080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" iface="eth0" netns="" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.679 [INFO][5080] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.679 [INFO][5080] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.758 [INFO][5100] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.758 [INFO][5100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.758 [INFO][5100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.786 [WARNING][5100] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.786 [INFO][5100] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" HandleID="k8s-pod-network.bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-goldmane--9f7667bb8--pkztx-eth0" Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.791 [INFO][5100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:12.810836 containerd[1457]: 2026-04-17 23:39:12.807 [INFO][5080] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459" Apr 17 23:39:12.810836 containerd[1457]: time="2026-04-17T23:39:12.810431487Z" level=info msg="TearDown network for sandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" successfully" Apr 17 23:39:12.821750 containerd[1457]: time="2026-04-17T23:39:12.821002997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:12.821750 containerd[1457]: time="2026-04-17T23:39:12.821099759Z" level=info msg="RemovePodSandbox \"bbb401199fdb9887f2e43be2dd54fcad22188040c708f4cfcc2611bf58c62459\" returns successfully" Apr 17 23:39:12.822745 containerd[1457]: time="2026-04-17T23:39:12.821876898Z" level=info msg="StopPodSandbox for \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\"" Apr 17 23:39:12.852918 systemd-networkd[1365]: cali5f8bca6e331: Gained IPv6LL Apr 17 23:39:13.070864 systemd-networkd[1365]: cali20580221bcb: Link UP Apr 17 23:39:13.073861 systemd-networkd[1365]: cali20580221bcb: Gained carrier Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.794 [INFO][5089] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0 coredns-7d764666f9- kube-system 6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f 1049 0 2026-04-17 23:38:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1 coredns-7d764666f9-c6pfx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20580221bcb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.794 [INFO][5089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.956 [INFO][5117] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" HandleID="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.977 [INFO][5117] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" HandleID="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000642010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", "pod":"coredns-7d764666f9-c6pfx", "timestamp":"2026-04-17 23:39:12.95619306 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00018adc0)} Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.977 [INFO][5117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.978 [INFO][5117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.978 [INFO][5117] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1' Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.983 [INFO][5117] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:12.991 [INFO][5117] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.002 [INFO][5117] ipam/ipam.go 526: Trying affinity for 192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.008 [INFO][5117] ipam/ipam.go 160: Attempting to load block cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.014 [INFO][5117] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.95.0/26 host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.015 [INFO][5117] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.95.0/26 handle="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.018 [INFO][5117] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3 Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.029 [INFO][5117] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.95.0/26 
handle="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.046 [INFO][5117] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.95.8/26] block=192.168.95.0/26 handle="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.046 [INFO][5117] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.95.8/26] handle="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" host="ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1" Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.046 [INFO][5117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:13.119838 containerd[1457]: 2026-04-17 23:39:13.046 [INFO][5117] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.95.8/26] IPv6=[] ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" HandleID="k8s-pod-network.fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.121023 containerd[1457]: 2026-04-17 23:39:13.055 [INFO][5089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0", GenerateName:"coredns-7d764666f9-", 
Namespace:"kube-system", SelfLink:"", UID:"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"", Pod:"coredns-7d764666f9-c6pfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20580221bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:13.121023 containerd[1457]: 2026-04-17 23:39:13.057 [INFO][5089] cni-plugin/k8s.go 419: Calico CNI 
using IPs: [192.168.95.8/32] ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.121023 containerd[1457]: 2026-04-17 23:39:13.057 [INFO][5089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20580221bcb ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.121023 containerd[1457]: 2026-04-17 23:39:13.080 [INFO][5089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.124701 containerd[1457]: 2026-04-17 23:39:13.086 [INFO][5089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3", Pod:"coredns-7d764666f9-c6pfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20580221bcb", MAC:"be:75:6c:81:b8:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:13.124701 containerd[1457]: 2026-04-17 23:39:13.111 [INFO][5089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3" Namespace="kube-system" 
Pod="coredns-7d764666f9-c6pfx" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.027 [WARNING][5121] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"91de5af7-c91f-4e46-b3e9-42f53f3c3734", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e", Pod:"calico-apiserver-5664b8d97f-trlct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9554e5c73d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.028 [INFO][5121] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.028 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" iface="eth0" netns="" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.028 [INFO][5121] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.028 [INFO][5121] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.118 [INFO][5137] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.118 [INFO][5137] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.122 [INFO][5137] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.138 [WARNING][5137] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.138 [INFO][5137] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.142 [INFO][5137] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:13.160540 containerd[1457]: 2026-04-17 23:39:13.153 [INFO][5121] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.160540 containerd[1457]: time="2026-04-17T23:39:13.160128409Z" level=info msg="TearDown network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" successfully" Apr 17 23:39:13.160540 containerd[1457]: time="2026-04-17T23:39:13.160166672Z" level=info msg="StopPodSandbox for \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" returns successfully" Apr 17 23:39:13.163151 containerd[1457]: time="2026-04-17T23:39:13.162977792Z" level=info msg="RemovePodSandbox for \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\"" Apr 17 23:39:13.163151 containerd[1457]: time="2026-04-17T23:39:13.163082747Z" level=info msg="Forcibly stopping sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\"" Apr 17 23:39:13.172916 systemd-networkd[1365]: calib76af7edca0: Gained IPv6LL Apr 17 23:39:13.241222 containerd[1457]: 
time="2026-04-17T23:39:13.241106474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:13.242706 containerd[1457]: time="2026-04-17T23:39:13.241204270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:13.242706 containerd[1457]: time="2026-04-17T23:39:13.241228474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:13.242706 containerd[1457]: time="2026-04-17T23:39:13.241347793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:13.338106 systemd[1]: Started cri-containerd-fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3.scope - libcontainer container fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3. 
Apr 17 23:39:13.453191 containerd[1457]: time="2026-04-17T23:39:13.453134370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c6pfx,Uid:6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f,Namespace:kube-system,Attempt:1,} returns sandbox id \"fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3\"" Apr 17 23:39:13.468486 containerd[1457]: time="2026-04-17T23:39:13.468409230Z" level=info msg="CreateContainer within sandbox \"fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:39:13.511541 containerd[1457]: time="2026-04-17T23:39:13.511445343Z" level=info msg="CreateContainer within sandbox \"fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bbf5b0461619ca667d0a12e3150145c36ec2c775dd1224652429c4e874d4ad9\"" Apr 17 23:39:13.513334 containerd[1457]: time="2026-04-17T23:39:13.513164355Z" level=info msg="StartContainer for \"6bbf5b0461619ca667d0a12e3150145c36ec2c775dd1224652429c4e874d4ad9\"" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.378 [WARNING][5173] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"91de5af7-c91f-4e46-b3e9-42f53f3c3734", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e", Pod:"calico-apiserver-5664b8d97f-trlct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9554e5c73d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.380 [INFO][5173] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.380 
[INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" iface="eth0" netns="" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.380 [INFO][5173] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.380 [INFO][5173] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.480 [INFO][5212] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.481 [INFO][5212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.481 [INFO][5212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.515 [WARNING][5212] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.515 [INFO][5212] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" HandleID="k8s-pod-network.279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--trlct-eth0" Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.522 [INFO][5212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:13.551644 containerd[1457]: 2026-04-17 23:39:13.537 [INFO][5173] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248" Apr 17 23:39:13.552442 containerd[1457]: time="2026-04-17T23:39:13.551685152Z" level=info msg="TearDown network for sandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" successfully" Apr 17 23:39:13.568135 containerd[1457]: time="2026-04-17T23:39:13.568036637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:13.568286 containerd[1457]: time="2026-04-17T23:39:13.568190744Z" level=info msg="RemovePodSandbox \"279036416e38140aed94255d5461914ea579a4e579c3f910db3c0f4e9b86e248\" returns successfully" Apr 17 23:39:13.569200 containerd[1457]: time="2026-04-17T23:39:13.569114062Z" level=info msg="StopPodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\"" Apr 17 23:39:13.608966 systemd[1]: Started cri-containerd-6bbf5b0461619ca667d0a12e3150145c36ec2c775dd1224652429c4e874d4ad9.scope - libcontainer container 6bbf5b0461619ca667d0a12e3150145c36ec2c775dd1224652429c4e874d4ad9. Apr 17 23:39:13.729296 containerd[1457]: time="2026-04-17T23:39:13.726986445Z" level=info msg="StartContainer for \"6bbf5b0461619ca667d0a12e3150145c36ec2c775dd1224652429c4e874d4ad9\" returns successfully" Apr 17 23:39:13.791553 kubelet[2578]: I0417 23:39:13.791147 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-c6pfx" podStartSLOduration=54.791123917 podStartE2EDuration="54.791123917s" podCreationTimestamp="2026-04-17 23:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:13.78908323 +0000 UTC m=+61.761083094" watchObservedRunningTime="2026-04-17 23:39:13.791123917 +0000 UTC m=+61.763123783" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.684 [WARNING][5251] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"21f2939a-2dbb-4eca-a507-d3f15555c474", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b", Pod:"coredns-7d764666f9-fkclr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib76af7edca0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.685 [INFO][5251] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.685 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" iface="eth0" netns="" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.685 [INFO][5251] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.685 [INFO][5251] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.817 [INFO][5266] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.819 [INFO][5266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.819 [INFO][5266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.843 [WARNING][5266] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.844 [INFO][5266] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.866 [INFO][5266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:13.884346 containerd[1457]: 2026-04-17 23:39:13.878 [INFO][5251] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:13.884346 containerd[1457]: time="2026-04-17T23:39:13.884141420Z" level=info msg="TearDown network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" successfully" Apr 17 23:39:13.884346 containerd[1457]: time="2026-04-17T23:39:13.884177664Z" level=info msg="StopPodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" returns successfully" Apr 17 23:39:13.886705 containerd[1457]: time="2026-04-17T23:39:13.885961962Z" level=info msg="RemovePodSandbox for \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\"" Apr 17 23:39:13.886705 containerd[1457]: time="2026-04-17T23:39:13.886299859Z" level=info msg="Forcibly stopping sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\"" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:13.978 [WARNING][5294] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"21f2939a-2dbb-4eca-a507-d3f15555c474", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"ed02c464e975a2d6a45ef1227da053ea0928963d37828be411aa361c51c9e19b", Pod:"coredns-7d764666f9-fkclr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib76af7edca0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:13.979 [INFO][5294] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:13.979 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" iface="eth0" netns="" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:13.979 [INFO][5294] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:13.979 [INFO][5294] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.026 [INFO][5304] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.026 [INFO][5304] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.026 [INFO][5304] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.039 [WARNING][5304] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.039 [INFO][5304] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" HandleID="k8s-pod-network.ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--fkclr-eth0" Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.042 [INFO][5304] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:14.047627 containerd[1457]: 2026-04-17 23:39:14.045 [INFO][5294] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c" Apr 17 23:39:14.047627 containerd[1457]: time="2026-04-17T23:39:14.047584869Z" level=info msg="TearDown network for sandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" successfully" Apr 17 23:39:14.054943 containerd[1457]: time="2026-04-17T23:39:14.054639677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:14.056563 containerd[1457]: time="2026-04-17T23:39:14.054799566Z" level=info msg="RemovePodSandbox \"ba1e2c0dbd2df1e960d6ba05bd1fc2334ae567105f7d0bef394a7709a278286c\" returns successfully" Apr 17 23:39:14.057164 containerd[1457]: time="2026-04-17T23:39:14.057129772Z" level=info msg="StopPodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\"" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.126 [WARNING][5318] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"7a296da7-7b92-4245-a4f5-9775c1f8a482", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c", Pod:"calico-apiserver-5664b8d97f-bbdfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.95.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf101e38354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.126 [INFO][5318] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.126 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" iface="eth0" netns="" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.126 [INFO][5318] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.126 [INFO][5318] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.171 [INFO][5325] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.172 [INFO][5325] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.172 [INFO][5325] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.185 [WARNING][5325] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.185 [INFO][5325] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.188 [INFO][5325] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:14.193653 containerd[1457]: 2026-04-17 23:39:14.190 [INFO][5318] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.194830 containerd[1457]: time="2026-04-17T23:39:14.193749013Z" level=info msg="TearDown network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" successfully" Apr 17 23:39:14.194830 containerd[1457]: time="2026-04-17T23:39:14.193809808Z" level=info msg="StopPodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" returns successfully" Apr 17 23:39:14.194830 containerd[1457]: time="2026-04-17T23:39:14.194688183Z" level=info msg="RemovePodSandbox for \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\"" Apr 17 23:39:14.194830 containerd[1457]: time="2026-04-17T23:39:14.194784895Z" level=info msg="Forcibly stopping sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\"" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.281 [WARNING][5339] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0", GenerateName:"calico-apiserver-5664b8d97f-", Namespace:"calico-system", SelfLink:"", UID:"7a296da7-7b92-4245-a4f5-9775c1f8a482", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664b8d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c", Pod:"calico-apiserver-5664b8d97f-bbdfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf101e38354", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.282 [INFO][5339] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.282 
[INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" iface="eth0" netns="" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.282 [INFO][5339] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.282 [INFO][5339] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.331 [INFO][5346] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.332 [INFO][5346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.332 [INFO][5346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.347 [WARNING][5346] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.347 [INFO][5346] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" HandleID="k8s-pod-network.303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--apiserver--5664b8d97f--bbdfw-eth0" Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.349 [INFO][5346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:14.358550 containerd[1457]: 2026-04-17 23:39:14.354 [INFO][5339] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560" Apr 17 23:39:14.358550 containerd[1457]: time="2026-04-17T23:39:14.356596416Z" level=info msg="TearDown network for sandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" successfully" Apr 17 23:39:14.367550 containerd[1457]: time="2026-04-17T23:39:14.367483470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:14.367550 containerd[1457]: time="2026-04-17T23:39:14.367572979Z" level=info msg="RemovePodSandbox \"303bf9327e03a9492b12a00d7cb87b708dbe2d15eddbec3baea24c3b40822560\" returns successfully" Apr 17 23:39:14.368909 containerd[1457]: time="2026-04-17T23:39:14.368585755Z" level=info msg="StopPodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\"" Apr 17 23:39:14.410489 containerd[1457]: time="2026-04-17T23:39:14.410235295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:14.417270 containerd[1457]: time="2026-04-17T23:39:14.417192057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:39:14.419325 containerd[1457]: time="2026-04-17T23:39:14.419258481Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:14.424346 containerd[1457]: time="2026-04-17T23:39:14.424296294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:14.427183 containerd[1457]: time="2026-04-17T23:39:14.427139316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.678583901s" Apr 17 23:39:14.427329 containerd[1457]: time="2026-04-17T23:39:14.427297625Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:39:14.432506 containerd[1457]: time="2026-04-17T23:39:14.430042828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:39:14.466026 containerd[1457]: time="2026-04-17T23:39:14.465954681Z" level=info msg="CreateContainer within sandbox \"a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:39:14.487865 containerd[1457]: time="2026-04-17T23:39:14.487149306Z" level=info msg="CreateContainer within sandbox \"a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a413a95eb3d0ade8327dc87aa360fd29a252298b5fbf2ffeb3924d84bd266a5\"" Apr 17 23:39:14.488717 containerd[1457]: time="2026-04-17T23:39:14.488653814Z" level=info msg="StartContainer for \"4a413a95eb3d0ade8327dc87aa360fd29a252298b5fbf2ffeb3924d84bd266a5\"" Apr 17 23:39:14.554893 systemd[1]: Started cri-containerd-4a413a95eb3d0ade8327dc87aa360fd29a252298b5fbf2ffeb3924d84bd266a5.scope - libcontainer container 4a413a95eb3d0ade8327dc87aa360fd29a252298b5fbf2ffeb3924d84bd266a5. 
Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.450 [WARNING][5360] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.463 [INFO][5360] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.463 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" iface="eth0" netns="" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.463 [INFO][5360] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.464 [INFO][5360] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.540 [INFO][5369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.540 [INFO][5369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.540 [INFO][5369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.562 [WARNING][5369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.562 [INFO][5369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.565 [INFO][5369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:14.574690 containerd[1457]: 2026-04-17 23:39:14.567 [INFO][5360] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.576002 containerd[1457]: time="2026-04-17T23:39:14.575963068Z" level=info msg="TearDown network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" successfully" Apr 17 23:39:14.576600 containerd[1457]: time="2026-04-17T23:39:14.576096624Z" level=info msg="StopPodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" returns successfully" Apr 17 23:39:14.577710 containerd[1457]: time="2026-04-17T23:39:14.577222132Z" level=info msg="RemovePodSandbox for \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\"" Apr 17 23:39:14.577710 containerd[1457]: time="2026-04-17T23:39:14.577282544Z" level=info msg="Forcibly stopping sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\"" Apr 17 23:39:14.659212 containerd[1457]: time="2026-04-17T23:39:14.659134513Z" level=info msg="StartContainer for \"4a413a95eb3d0ade8327dc87aa360fd29a252298b5fbf2ffeb3924d84bd266a5\" returns successfully" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.658 [WARNING][5407] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.658 [INFO][5407] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.659 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" iface="eth0" netns="" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.659 [INFO][5407] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.659 [INFO][5407] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.715 [INFO][5423] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.716 [INFO][5423] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.716 [INFO][5423] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.731 [WARNING][5423] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.733 [INFO][5423] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" HandleID="k8s-pod-network.d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-whisker--5ccff6d658--s9fnj-eth0" Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.736 [INFO][5423] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:14.741549 containerd[1457]: 2026-04-17 23:39:14.739 [INFO][5407] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7" Apr 17 23:39:14.742260 containerd[1457]: time="2026-04-17T23:39:14.741633197Z" level=info msg="TearDown network for sandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" successfully" Apr 17 23:39:14.751525 containerd[1457]: time="2026-04-17T23:39:14.750218312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:14.751525 containerd[1457]: time="2026-04-17T23:39:14.750312737Z" level=info msg="RemovePodSandbox \"d9034e0c46cfa3d202ab5fc1c3376ed6ec05a528f5e1f3e2aebfeb00ac6cd0e7\" returns successfully" Apr 17 23:39:14.751525 containerd[1457]: time="2026-04-17T23:39:14.751243988Z" level=info msg="StopPodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\"" Apr 17 23:39:14.828723 kubelet[2578]: I0417 23:39:14.828528 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f7d986d88-pzk8v" podStartSLOduration=33.146611075 podStartE2EDuration="38.828503329s" podCreationTimestamp="2026-04-17 23:38:36 +0000 UTC" firstStartedPulling="2026-04-17 23:39:08.747371307 +0000 UTC m=+56.719371146" lastFinishedPulling="2026-04-17 23:39:14.429263539 +0000 UTC m=+62.401263400" observedRunningTime="2026-04-17 23:39:14.826839436 +0000 UTC m=+62.798839304" watchObservedRunningTime="2026-04-17 23:39:14.828503329 +0000 UTC m=+62.800503224" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.871 [WARNING][5445] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52277689-f4f8-4eb4-acdf-589f30ebdb48", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3", Pod:"csi-node-driver-zq7p4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ffac2f3f2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.872 [INFO][5445] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.872 
[INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" iface="eth0" netns="" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.873 [INFO][5445] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.873 [INFO][5445] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.935 [INFO][5463] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.937 [INFO][5463] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.940 [INFO][5463] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.959 [WARNING][5463] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.959 [INFO][5463] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.963 [INFO][5463] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:14.976606 containerd[1457]: 2026-04-17 23:39:14.970 [INFO][5445] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:14.976606 containerd[1457]: time="2026-04-17T23:39:14.975869688Z" level=info msg="TearDown network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" successfully" Apr 17 23:39:14.976606 containerd[1457]: time="2026-04-17T23:39:14.975919172Z" level=info msg="StopPodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" returns successfully" Apr 17 23:39:14.981240 containerd[1457]: time="2026-04-17T23:39:14.979693619Z" level=info msg="RemovePodSandbox for \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\"" Apr 17 23:39:14.981240 containerd[1457]: time="2026-04-17T23:39:14.979734292Z" level=info msg="Forcibly stopping sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\"" Apr 17 23:39:15.030107 systemd-networkd[1365]: cali20580221bcb: Gained IPv6LL Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.065 [WARNING][5484] 
cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52277689-f4f8-4eb4-acdf-589f30ebdb48", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3", Pod:"csi-node-driver-zq7p4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ffac2f3f2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.066 [INFO][5484] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.066 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" iface="eth0" netns="" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.066 [INFO][5484] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.066 [INFO][5484] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.111 [INFO][5495] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.119 [INFO][5495] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.119 [INFO][5495] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.134 [WARNING][5495] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.134 [INFO][5495] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" HandleID="k8s-pod-network.f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-csi--node--driver--zq7p4-eth0" Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.136 [INFO][5495] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:15.143548 containerd[1457]: 2026-04-17 23:39:15.139 [INFO][5484] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530" Apr 17 23:39:15.143548 containerd[1457]: time="2026-04-17T23:39:15.142670727Z" level=info msg="TearDown network for sandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" successfully" Apr 17 23:39:15.153682 containerd[1457]: time="2026-04-17T23:39:15.153577763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:15.153834 containerd[1457]: time="2026-04-17T23:39:15.153688148Z" level=info msg="RemovePodSandbox \"f7da2ccd9e799850df546ab4507e5c0503830469a1c7c183c2c97db8db321530\" returns successfully" Apr 17 23:39:15.156099 containerd[1457]: time="2026-04-17T23:39:15.156056846Z" level=info msg="StopPodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\"" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.219 [WARNING][5514] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0", GenerateName:"calico-kube-controllers-5f7d986d88-", Namespace:"calico-system", SelfLink:"", UID:"41734535-6436-46fc-9937-84a76aab1f06", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7d986d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678", Pod:"calico-kube-controllers-5f7d986d88-pzk8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.95.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b4e128b214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.220 [INFO][5514] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.220 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" iface="eth0" netns="" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.220 [INFO][5514] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.220 [INFO][5514] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.252 [INFO][5521] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.253 [INFO][5521] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.253 [INFO][5521] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.263 [WARNING][5521] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.264 [INFO][5521] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.266 [INFO][5521] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:15.272107 containerd[1457]: 2026-04-17 23:39:15.268 [INFO][5514] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.272107 containerd[1457]: time="2026-04-17T23:39:15.271853486Z" level=info msg="TearDown network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" successfully" Apr 17 23:39:15.272107 containerd[1457]: time="2026-04-17T23:39:15.271890838Z" level=info msg="StopPodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" returns successfully" Apr 17 23:39:15.274387 containerd[1457]: time="2026-04-17T23:39:15.273999502Z" level=info msg="RemovePodSandbox for \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\"" Apr 17 23:39:15.274387 containerd[1457]: time="2026-04-17T23:39:15.274157047Z" level=info msg="Forcibly stopping sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\"" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.355 [WARNING][5536] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0", GenerateName:"calico-kube-controllers-5f7d986d88-", Namespace:"calico-system", SelfLink:"", UID:"41734535-6436-46fc-9937-84a76aab1f06", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7d986d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"a0d4cdb589c4151666e1f22944c4ec6a97e588bcc701520921bb981fa3d1c678", Pod:"calico-kube-controllers-5f7d986d88-pzk8v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b4e128b214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.355 [INFO][5536] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.424260 
containerd[1457]: 2026-04-17 23:39:15.355 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" iface="eth0" netns="" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.355 [INFO][5536] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.355 [INFO][5536] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.395 [INFO][5543] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.395 [INFO][5543] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.395 [INFO][5543] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.411 [WARNING][5543] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.412 [INFO][5543] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" HandleID="k8s-pod-network.55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-calico--kube--controllers--5f7d986d88--pzk8v-eth0" Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.417 [INFO][5543] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:39:15.424260 containerd[1457]: 2026-04-17 23:39:15.419 [INFO][5536] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552" Apr 17 23:39:15.424260 containerd[1457]: time="2026-04-17T23:39:15.423431037Z" level=info msg="TearDown network for sandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" successfully" Apr 17 23:39:15.431858 containerd[1457]: time="2026-04-17T23:39:15.431355565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:39:15.431858 containerd[1457]: time="2026-04-17T23:39:15.431504254Z" level=info msg="RemovePodSandbox \"55ccf747857830037174c5c2b935a05f5c5a1c7d4f948dd9fedbb61d368ca552\" returns successfully" Apr 17 23:39:15.616146 containerd[1457]: time="2026-04-17T23:39:15.615973584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:15.618183 containerd[1457]: time="2026-04-17T23:39:15.618107059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:39:15.619875 containerd[1457]: time="2026-04-17T23:39:15.619734956Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:15.623279 containerd[1457]: time="2026-04-17T23:39:15.623192666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:15.624583 containerd[1457]: time="2026-04-17T23:39:15.624392033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.194303704s" Apr 17 23:39:15.624583 containerd[1457]: time="2026-04-17T23:39:15.624438564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:39:15.627061 containerd[1457]: time="2026-04-17T23:39:15.626679165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 
23:39:15.631669 containerd[1457]: time="2026-04-17T23:39:15.631614409Z" level=info msg="CreateContainer within sandbox \"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:39:15.654826 containerd[1457]: time="2026-04-17T23:39:15.654759367Z" level=info msg="CreateContainer within sandbox \"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4e1533de6b80448d2536171a535992a7ef5c2e932e445c6272115c624203d380\"" Apr 17 23:39:15.657331 containerd[1457]: time="2026-04-17T23:39:15.655960365Z" level=info msg="StartContainer for \"4e1533de6b80448d2536171a535992a7ef5c2e932e445c6272115c624203d380\"" Apr 17 23:39:15.713708 systemd[1]: Started cri-containerd-4e1533de6b80448d2536171a535992a7ef5c2e932e445c6272115c624203d380.scope - libcontainer container 4e1533de6b80448d2536171a535992a7ef5c2e932e445c6272115c624203d380. Apr 17 23:39:15.757413 containerd[1457]: time="2026-04-17T23:39:15.757267006Z" level=info msg="StartContainer for \"4e1533de6b80448d2536171a535992a7ef5c2e932e445c6272115c624203d380\" returns successfully" Apr 17 23:39:17.453365 ntpd[1421]: Listen normally on 10 cali2b4e128b214 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:39:17.453533 ntpd[1421]: Listen normally on 11 cali9554e5c73d5 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 23:39:17 ntpd[1421]: Listen normally on 10 cali2b4e128b214 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 23:39:17 ntpd[1421]: Listen normally on 11 cali9554e5c73d5 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 23:39:17 ntpd[1421]: Listen normally on 12 cali8ffac2f3f2f [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 23:39:17 ntpd[1421]: Listen normally on 13 calidf101e38354 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 
23:39:17 ntpd[1421]: Listen normally on 14 cali5f8bca6e331 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 23:39:17 ntpd[1421]: Listen normally on 15 calib76af7edca0 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:39:17.454109 ntpd[1421]: 17 Apr 23:39:17 ntpd[1421]: Listen normally on 16 cali20580221bcb [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:39:17.453610 ntpd[1421]: Listen normally on 12 cali8ffac2f3f2f [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:39:17.453670 ntpd[1421]: Listen normally on 13 calidf101e38354 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:39:17.453737 ntpd[1421]: Listen normally on 14 cali5f8bca6e331 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:39:17.453793 ntpd[1421]: Listen normally on 15 calib76af7edca0 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:39:17.453847 ntpd[1421]: Listen normally on 16 cali20580221bcb [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:39:18.081748 containerd[1457]: time="2026-04-17T23:39:18.081683404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:18.083307 containerd[1457]: time="2026-04-17T23:39:18.083247564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:39:18.084487 containerd[1457]: time="2026-04-17T23:39:18.084356151Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:18.090481 containerd[1457]: time="2026-04-17T23:39:18.089118753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:18.090724 containerd[1457]: time="2026-04-17T23:39:18.090669432Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.463943529s" Apr 17 23:39:18.090811 containerd[1457]: time="2026-04-17T23:39:18.090726263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:39:18.093240 containerd[1457]: time="2026-04-17T23:39:18.093067757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:39:18.098158 containerd[1457]: time="2026-04-17T23:39:18.098114723Z" level=info msg="CreateContainer within sandbox \"b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:39:18.123383 containerd[1457]: time="2026-04-17T23:39:18.123201367Z" level=info msg="CreateContainer within sandbox \"b0b80b73ae92212ce85a94441f2b016a35dba999c3de45ee12ea1ff6da462b0e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c5d0cb6a9549f1cff23124ca29910c1ca5152ce82e24915b8b3a8cb01f2b7718\"" Apr 17 23:39:18.125296 containerd[1457]: time="2026-04-17T23:39:18.125216744Z" level=info msg="StartContainer for \"c5d0cb6a9549f1cff23124ca29910c1ca5152ce82e24915b8b3a8cb01f2b7718\"" Apr 17 23:39:18.181776 systemd[1]: Started cri-containerd-c5d0cb6a9549f1cff23124ca29910c1ca5152ce82e24915b8b3a8cb01f2b7718.scope - libcontainer container c5d0cb6a9549f1cff23124ca29910c1ca5152ce82e24915b8b3a8cb01f2b7718. 
Apr 17 23:39:18.246353 containerd[1457]: time="2026-04-17T23:39:18.245912873Z" level=info msg="StartContainer for \"c5d0cb6a9549f1cff23124ca29910c1ca5152ce82e24915b8b3a8cb01f2b7718\" returns successfully" Apr 17 23:39:18.297125 containerd[1457]: time="2026-04-17T23:39:18.297045113Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:18.299408 containerd[1457]: time="2026-04-17T23:39:18.298581380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:39:18.306481 containerd[1457]: time="2026-04-17T23:39:18.305079730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 211.946744ms" Apr 17 23:39:18.306481 containerd[1457]: time="2026-04-17T23:39:18.305133391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:39:18.309738 containerd[1457]: time="2026-04-17T23:39:18.309482510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:39:18.313668 containerd[1457]: time="2026-04-17T23:39:18.313618164Z" level=info msg="CreateContainer within sandbox \"5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:39:18.343552 containerd[1457]: time="2026-04-17T23:39:18.341827747Z" level=info msg="CreateContainer within sandbox \"5e6743d6cbb2c550e0cf101501f363e79d12a4e3a73bd3a0e3e8ab2db6796e8c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"0dfa2fdcb89c2fa29ee6e5378bf2bec508b1d9b79191d0e89b205cb09e254f6c\"" Apr 17 23:39:18.348643 containerd[1457]: time="2026-04-17T23:39:18.345706154Z" level=info msg="StartContainer for \"0dfa2fdcb89c2fa29ee6e5378bf2bec508b1d9b79191d0e89b205cb09e254f6c\"" Apr 17 23:39:18.427704 systemd[1]: Started cri-containerd-0dfa2fdcb89c2fa29ee6e5378bf2bec508b1d9b79191d0e89b205cb09e254f6c.scope - libcontainer container 0dfa2fdcb89c2fa29ee6e5378bf2bec508b1d9b79191d0e89b205cb09e254f6c. Apr 17 23:39:18.503772 containerd[1457]: time="2026-04-17T23:39:18.503724154Z" level=info msg="StartContainer for \"0dfa2fdcb89c2fa29ee6e5378bf2bec508b1d9b79191d0e89b205cb09e254f6c\" returns successfully" Apr 17 23:39:18.874435 kubelet[2578]: I0417 23:39:18.874309 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5664b8d97f-trlct" podStartSLOduration=36.987214318 podStartE2EDuration="44.87428858s" podCreationTimestamp="2026-04-17 23:38:34 +0000 UTC" firstStartedPulling="2026-04-17 23:39:10.205715859 +0000 UTC m=+58.177715717" lastFinishedPulling="2026-04-17 23:39:18.092790117 +0000 UTC m=+66.064789979" observedRunningTime="2026-04-17 23:39:18.848965432 +0000 UTC m=+66.820965296" watchObservedRunningTime="2026-04-17 23:39:18.87428858 +0000 UTC m=+66.846288445" Apr 17 23:39:18.875260 kubelet[2578]: I0417 23:39:18.875195 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5664b8d97f-bbdfw" podStartSLOduration=38.283406502 podStartE2EDuration="44.875178896s" podCreationTimestamp="2026-04-17 23:38:34 +0000 UTC" firstStartedPulling="2026-04-17 23:39:11.716098772 +0000 UTC m=+59.688098627" lastFinishedPulling="2026-04-17 23:39:18.307871166 +0000 UTC m=+66.279871021" observedRunningTime="2026-04-17 23:39:18.874106445 +0000 UTC m=+66.846106349" watchObservedRunningTime="2026-04-17 23:39:18.875178896 +0000 UTC m=+66.847178783" Apr 17 23:39:19.836382 kubelet[2578]: I0417 
23:39:19.836337 2578 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:20.843147 kubelet[2578]: I0417 23:39:20.842157 2578 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:20.870982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277403032.mount: Deactivated successfully. Apr 17 23:39:22.085662 containerd[1457]: time="2026-04-17T23:39:22.085030424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.089080 containerd[1457]: time="2026-04-17T23:39:22.088662504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:39:22.090831 containerd[1457]: time="2026-04-17T23:39:22.090788028Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.098863 containerd[1457]: time="2026-04-17T23:39:22.098425347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:22.103876 containerd[1457]: time="2026-04-17T23:39:22.103808891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.794284039s" Apr 17 23:39:22.104119 containerd[1457]: time="2026-04-17T23:39:22.104064865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:39:22.109481 containerd[1457]: time="2026-04-17T23:39:22.107032038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:39:22.112675 containerd[1457]: time="2026-04-17T23:39:22.112486609Z" level=info msg="CreateContainer within sandbox \"9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:39:22.132098 containerd[1457]: time="2026-04-17T23:39:22.132047130Z" level=info msg="CreateContainer within sandbox \"9699a9b2e922852e181fafb0080338e4cedb2ddbededa1e161fd6b5dd933a791\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280\"" Apr 17 23:39:22.135544 containerd[1457]: time="2026-04-17T23:39:22.135499229Z" level=info msg="StartContainer for \"1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280\"" Apr 17 23:39:22.221882 systemd[1]: Started cri-containerd-1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280.scope - libcontainer container 1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280. 
Apr 17 23:39:22.350684 containerd[1457]: time="2026-04-17T23:39:22.350529513Z" level=info msg="StartContainer for \"1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280\" returns successfully" Apr 17 23:39:22.893528 kubelet[2578]: I0417 23:39:22.891225 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-pkztx" podStartSLOduration=37.53801174 podStartE2EDuration="47.891201971s" podCreationTimestamp="2026-04-17 23:38:35 +0000 UTC" firstStartedPulling="2026-04-17 23:39:11.752107853 +0000 UTC m=+59.724107707" lastFinishedPulling="2026-04-17 23:39:22.105298082 +0000 UTC m=+70.077297938" observedRunningTime="2026-04-17 23:39:22.882572315 +0000 UTC m=+70.854572180" watchObservedRunningTime="2026-04-17 23:39:22.891201971 +0000 UTC m=+70.863201836" Apr 17 23:39:23.623369 containerd[1457]: time="2026-04-17T23:39:23.623279305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.624938 containerd[1457]: time="2026-04-17T23:39:23.624858314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:39:23.626828 containerd[1457]: time="2026-04-17T23:39:23.626756926Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.631626 containerd[1457]: time="2026-04-17T23:39:23.631546261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:23.633059 containerd[1457]: time="2026-04-17T23:39:23.632678483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.525603481s" Apr 17 23:39:23.633059 containerd[1457]: time="2026-04-17T23:39:23.632732305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:39:23.639606 containerd[1457]: time="2026-04-17T23:39:23.639562931Z" level=info msg="CreateContainer within sandbox \"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:39:23.660561 containerd[1457]: time="2026-04-17T23:39:23.660383579Z" level=info msg="CreateContainer within sandbox \"959d70e691f26205d845ae4df1687bc76821ff1e7e465381867dc36581c7c6c3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f57189bb8759b6f3864294100fc0512e3cb1fa85df7f42ccceadfef007b3d5d4\"" Apr 17 23:39:23.661786 containerd[1457]: time="2026-04-17T23:39:23.661721929Z" level=info msg="StartContainer for \"f57189bb8759b6f3864294100fc0512e3cb1fa85df7f42ccceadfef007b3d5d4\"" Apr 17 23:39:23.725703 systemd[1]: Started cri-containerd-f57189bb8759b6f3864294100fc0512e3cb1fa85df7f42ccceadfef007b3d5d4.scope - libcontainer container f57189bb8759b6f3864294100fc0512e3cb1fa85df7f42ccceadfef007b3d5d4. 
Apr 17 23:39:23.772514 containerd[1457]: time="2026-04-17T23:39:23.771732058Z" level=info msg="StartContainer for \"f57189bb8759b6f3864294100fc0512e3cb1fa85df7f42ccceadfef007b3d5d4\" returns successfully" Apr 17 23:39:24.367995 kubelet[2578]: I0417 23:39:24.367484 2578 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:39:24.367995 kubelet[2578]: I0417 23:39:24.367536 2578 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:39:24.944353 systemd[1]: run-containerd-runc-k8s.io-1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280-runc.711g1b.mount: Deactivated successfully. Apr 17 23:39:28.853613 kubelet[2578]: I0417 23:39:28.853528 2578 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-zq7p4" podStartSLOduration=39.397672764 podStartE2EDuration="52.853506701s" podCreationTimestamp="2026-04-17 23:38:36 +0000 UTC" firstStartedPulling="2026-04-17 23:39:10.178317337 +0000 UTC m=+58.150317181" lastFinishedPulling="2026-04-17 23:39:23.634151264 +0000 UTC m=+71.606151118" observedRunningTime="2026-04-17 23:39:23.895236638 +0000 UTC m=+71.867236504" watchObservedRunningTime="2026-04-17 23:39:28.853506701 +0000 UTC m=+76.825506570" Apr 17 23:39:34.613004 systemd[1]: Started sshd@7-10.128.0.110:22-50.85.169.122:39660.service - OpenSSH per-connection server daemon (50.85.169.122:39660). Apr 17 23:39:35.395320 sshd[5930]: Accepted publickey for core from 50.85.169.122 port 39660 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:39:35.399201 sshd[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:35.408842 systemd-logind[1428]: New session 8 of user core. 
Apr 17 23:39:35.417063 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:39:36.066744 sshd[5930]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:36.073729 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:39:36.074799 systemd[1]: sshd@7-10.128.0.110:22-50.85.169.122:39660.service: Deactivated successfully. Apr 17 23:39:36.079635 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:39:36.083395 systemd-logind[1428]: Removed session 8. Apr 17 23:39:41.189853 systemd[1]: Started sshd@8-10.128.0.110:22-50.85.169.122:48366.service - OpenSSH per-connection server daemon (50.85.169.122:48366). Apr 17 23:39:41.805724 kubelet[2578]: I0417 23:39:41.805580 2578 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:41.869408 sshd[5957]: Accepted publickey for core from 50.85.169.122 port 48366 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:39:41.872046 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:41.879993 systemd-logind[1428]: New session 9 of user core. Apr 17 23:39:41.886683 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:39:42.435964 sshd[5957]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:42.442695 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:39:42.443076 systemd[1]: sshd@8-10.128.0.110:22-50.85.169.122:48366.service: Deactivated successfully. Apr 17 23:39:42.446353 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:39:42.447928 systemd-logind[1428]: Removed session 9. Apr 17 23:39:47.561953 systemd[1]: Started sshd@9-10.128.0.110:22-50.85.169.122:48382.service - OpenSSH per-connection server daemon (50.85.169.122:48382). 
Apr 17 23:39:48.232395 sshd[5994]: Accepted publickey for core from 50.85.169.122 port 48382 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:39:48.234895 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:48.242484 systemd-logind[1428]: New session 10 of user core. Apr 17 23:39:48.249700 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:39:48.794562 sshd[5994]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:48.802026 systemd[1]: sshd@9-10.128.0.110:22-50.85.169.122:48382.service: Deactivated successfully. Apr 17 23:39:48.802201 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:39:48.805824 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:39:48.807911 systemd-logind[1428]: Removed session 10. Apr 17 23:39:53.919915 systemd[1]: Started sshd@10-10.128.0.110:22-50.85.169.122:41832.service - OpenSSH per-connection server daemon (50.85.169.122:41832). Apr 17 23:39:54.605416 sshd[6016]: Accepted publickey for core from 50.85.169.122 port 41832 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:39:54.608024 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:54.617746 systemd-logind[1428]: New session 11 of user core. Apr 17 23:39:54.625844 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:39:55.204197 sshd[6016]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:55.210093 systemd[1]: sshd@10-10.128.0.110:22-50.85.169.122:41832.service: Deactivated successfully. Apr 17 23:39:55.213063 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:39:55.215411 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:39:55.217566 systemd-logind[1428]: Removed session 11. 
Apr 17 23:39:55.329959 systemd[1]: Started sshd@11-10.128.0.110:22-50.85.169.122:41844.service - OpenSSH per-connection server daemon (50.85.169.122:41844). Apr 17 23:39:56.010052 sshd[6066]: Accepted publickey for core from 50.85.169.122 port 41844 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:39:56.012048 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:56.018198 systemd-logind[1428]: New session 12 of user core. Apr 17 23:39:56.024722 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:39:56.658377 sshd[6066]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:56.663190 systemd[1]: sshd@11-10.128.0.110:22-50.85.169.122:41844.service: Deactivated successfully. Apr 17 23:39:56.665990 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:39:56.668559 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:39:56.670379 systemd-logind[1428]: Removed session 12. Apr 17 23:39:56.775539 systemd[1]: Started sshd@12-10.128.0.110:22-50.85.169.122:41848.service - OpenSSH per-connection server daemon (50.85.169.122:41848). Apr 17 23:39:57.456508 sshd[6077]: Accepted publickey for core from 50.85.169.122 port 41848 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:39:57.457622 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:57.463300 systemd-logind[1428]: New session 13 of user core. Apr 17 23:39:57.471669 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:39:58.009517 sshd[6077]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:58.014648 systemd[1]: sshd@12-10.128.0.110:22-50.85.169.122:41848.service: Deactivated successfully. Apr 17 23:39:58.017242 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:39:58.020110 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit. 
Apr 17 23:39:58.022062 systemd-logind[1428]: Removed session 13. Apr 17 23:40:03.139859 systemd[1]: Started sshd@13-10.128.0.110:22-50.85.169.122:43958.service - OpenSSH per-connection server daemon (50.85.169.122:43958). Apr 17 23:40:03.825251 sshd[6114]: Accepted publickey for core from 50.85.169.122 port 43958 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:03.826913 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:03.840765 systemd-logind[1428]: New session 14 of user core. Apr 17 23:40:03.845229 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:40:04.447808 sshd[6114]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:04.454492 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:40:04.455419 systemd[1]: sshd@13-10.128.0.110:22-50.85.169.122:43958.service: Deactivated successfully. Apr 17 23:40:04.463029 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:40:04.467129 systemd-logind[1428]: Removed session 14. Apr 17 23:40:04.568947 systemd[1]: Started sshd@14-10.128.0.110:22-50.85.169.122:43964.service - OpenSSH per-connection server daemon (50.85.169.122:43964). Apr 17 23:40:05.249944 sshd[6126]: Accepted publickey for core from 50.85.169.122 port 43964 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:05.251811 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:05.258629 systemd-logind[1428]: New session 15 of user core. Apr 17 23:40:05.269860 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:40:05.872409 sshd[6126]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:05.878124 systemd[1]: sshd@14-10.128.0.110:22-50.85.169.122:43964.service: Deactivated successfully. Apr 17 23:40:05.881619 systemd[1]: session-15.scope: Deactivated successfully. 
Apr 17 23:40:05.883156 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:40:05.885021 systemd-logind[1428]: Removed session 15. Apr 17 23:40:06.004937 systemd[1]: Started sshd@15-10.128.0.110:22-50.85.169.122:43968.service - OpenSSH per-connection server daemon (50.85.169.122:43968). Apr 17 23:40:06.691499 sshd[6136]: Accepted publickey for core from 50.85.169.122 port 43968 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:06.693377 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:06.700049 systemd-logind[1428]: New session 16 of user core. Apr 17 23:40:06.706781 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:40:07.875254 sshd[6136]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:07.881236 systemd[1]: sshd@15-10.128.0.110:22-50.85.169.122:43968.service: Deactivated successfully. Apr 17 23:40:07.885079 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:40:07.886358 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:40:07.888149 systemd-logind[1428]: Removed session 16. Apr 17 23:40:08.000924 systemd[1]: Started sshd@16-10.128.0.110:22-50.85.169.122:43970.service - OpenSSH per-connection server daemon (50.85.169.122:43970). Apr 17 23:40:08.684542 sshd[6162]: Accepted publickey for core from 50.85.169.122 port 43970 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:08.686760 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:08.693616 systemd-logind[1428]: New session 17 of user core. Apr 17 23:40:08.699733 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:40:09.383272 sshd[6162]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:09.389258 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit. 
Apr 17 23:40:09.389671 systemd[1]: sshd@16-10.128.0.110:22-50.85.169.122:43970.service: Deactivated successfully. Apr 17 23:40:09.393969 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:40:09.395652 systemd-logind[1428]: Removed session 17. Apr 17 23:40:09.505862 systemd[1]: Started sshd@17-10.128.0.110:22-50.85.169.122:43976.service - OpenSSH per-connection server daemon (50.85.169.122:43976). Apr 17 23:40:10.174343 sshd[6175]: Accepted publickey for core from 50.85.169.122 port 43976 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:10.176241 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:10.183086 systemd-logind[1428]: New session 18 of user core. Apr 17 23:40:10.190665 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:40:10.726285 sshd[6175]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:10.732423 systemd[1]: sshd@17-10.128.0.110:22-50.85.169.122:43976.service: Deactivated successfully. Apr 17 23:40:10.735978 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:40:10.737394 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:40:10.738940 systemd-logind[1428]: Removed session 18. Apr 17 23:40:15.437390 containerd[1457]: time="2026-04-17T23:40:15.437323193Z" level=info msg="StopPodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\"" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.493 [WARNING][6238] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3", Pod:"coredns-7d764666f9-c6pfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20580221bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.494 [INFO][6238] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.494 [INFO][6238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" iface="eth0" netns="" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.494 [INFO][6238] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.494 [INFO][6238] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.528 [INFO][6245] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.529 [INFO][6245] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.529 [INFO][6245] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.537 [WARNING][6245] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.537 [INFO][6245] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.541 [INFO][6245] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:15.545022 containerd[1457]: 2026-04-17 23:40:15.543 [INFO][6238] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.545950 containerd[1457]: time="2026-04-17T23:40:15.545071938Z" level=info msg="TearDown network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" successfully" Apr 17 23:40:15.545950 containerd[1457]: time="2026-04-17T23:40:15.545111663Z" level=info msg="StopPodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" returns successfully" Apr 17 23:40:15.546207 containerd[1457]: time="2026-04-17T23:40:15.546179570Z" level=info msg="RemovePodSandbox for \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\"" Apr 17 23:40:15.546263 containerd[1457]: time="2026-04-17T23:40:15.546226483Z" level=info msg="Forcibly stopping sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\"" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.604 [WARNING][6259] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6c9e2a5f-cf47-40c7-aad6-17d5a4bfa72f", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4e65f3a655feaa8febc1", ContainerID:"fc9bb59261c995829323c8dddbae116b3203ad042b6f36f7c5bf34a16a8a33c3", Pod:"coredns-7d764666f9-c6pfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20580221bcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.605 [INFO][6259] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.605 [INFO][6259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" iface="eth0" netns="" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.605 [INFO][6259] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.605 [INFO][6259] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.649 [INFO][6267] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.649 [INFO][6267] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.649 [INFO][6267] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.659 [WARNING][6267] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.659 [INFO][6267] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" HandleID="k8s-pod-network.713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Workload="ci--4081--3--6--nightly--20260417--2100--4e65f3a655feaa8febc1-k8s-coredns--7d764666f9--c6pfx-eth0" Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.661 [INFO][6267] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:15.664596 containerd[1457]: 2026-04-17 23:40:15.663 [INFO][6259] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878" Apr 17 23:40:15.666092 containerd[1457]: time="2026-04-17T23:40:15.664669273Z" level=info msg="TearDown network for sandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" successfully" Apr 17 23:40:15.674226 containerd[1457]: time="2026-04-17T23:40:15.674157633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:40:15.674431 containerd[1457]: time="2026-04-17T23:40:15.674273311Z" level=info msg="RemovePodSandbox \"713ff1475f0d8d9809188796bcd40b4c7817338b5c8d0303441dd4b497784878\" returns successfully" Apr 17 23:40:15.850929 systemd[1]: Started sshd@18-10.128.0.110:22-50.85.169.122:60834.service - OpenSSH per-connection server daemon (50.85.169.122:60834). Apr 17 23:40:16.529920 sshd[6275]: Accepted publickey for core from 50.85.169.122 port 60834 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:16.531824 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:16.539525 systemd-logind[1428]: New session 19 of user core. Apr 17 23:40:16.543813 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:40:17.078409 sshd[6275]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:17.083871 systemd[1]: sshd@18-10.128.0.110:22-50.85.169.122:60834.service: Deactivated successfully. Apr 17 23:40:17.087003 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:40:17.089242 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:40:17.091768 systemd-logind[1428]: Removed session 19. Apr 17 23:40:22.199829 systemd[1]: Started sshd@19-10.128.0.110:22-50.85.169.122:60850.service - OpenSSH per-connection server daemon (50.85.169.122:60850). Apr 17 23:40:22.880012 sshd[6301]: Accepted publickey for core from 50.85.169.122 port 60850 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:40:22.882032 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:22.888471 systemd-logind[1428]: New session 20 of user core. Apr 17 23:40:22.893971 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:40:23.438642 sshd[6301]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:23.445014 systemd-logind[1428]: Session 20 logged out. 
Waiting for processes to exit. Apr 17 23:40:23.445717 systemd[1]: sshd@19-10.128.0.110:22-50.85.169.122:60850.service: Deactivated successfully. Apr 17 23:40:23.449147 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:40:23.450714 systemd-logind[1428]: Removed session 20. Apr 17 23:40:24.908539 systemd[1]: run-containerd-runc-k8s.io-1cdea5e9f25ea87db4360323ee63c20075d3a08c288ec4f433610b7cad214280-runc.ePSRHW.mount: Deactivated successfully.