Apr 17 23:43:45.115630 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:43:45.115684 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:43:45.115702 kernel: BIOS-provided physical RAM map:
Apr 17 23:43:45.115716 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 17 23:43:45.115729 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 17 23:43:45.115742 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 17 23:43:45.115758 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 17 23:43:45.115776 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 17 23:43:45.115790 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 17 23:43:45.115804 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Apr 17 23:43:45.115818 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Apr 17 23:43:45.115832 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Apr 17 23:43:45.115847 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 17 23:43:45.115861 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 17 23:43:45.115882 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 17 23:43:45.115898 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 17 23:43:45.115912 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 17 23:43:45.115947 kernel: NX (Execute Disable) protection: active
Apr 17 23:43:45.115961 kernel: APIC: Static calls initialized
Apr 17 23:43:45.115975 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:43:45.115989 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd300018
Apr 17 23:43:45.116003 kernel: SMBIOS 2.4 present.
Apr 17 23:43:45.116019 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Apr 17 23:43:45.116034 kernel: Hypervisor detected: KVM
Apr 17 23:43:45.116055 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:43:45.116072 kernel: kvm-clock: using sched offset of 13286102190 cycles
Apr 17 23:43:45.116087 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:43:45.116103 kernel: tsc: Detected 2299.998 MHz processor
Apr 17 23:43:45.116117 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:43:45.116132 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:43:45.116148 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 17 23:43:45.116164 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Apr 17 23:43:45.116180 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:43:45.116200 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 17 23:43:45.116216 kernel: Using GB pages for direct mapping
Apr 17 23:43:45.116231 kernel: Secure boot disabled
Apr 17 23:43:45.116247 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:43:45.116263 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 17 23:43:45.116279 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 17 23:43:45.116296 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 17 23:43:45.116318 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 17 23:43:45.116347 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 17 23:43:45.116363 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250807)
Apr 17 23:43:45.116380 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 17 23:43:45.116406 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 17 23:43:45.116422 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 17 23:43:45.117091 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 17 23:43:45.117123 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 17 23:43:45.117142 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 17 23:43:45.117160 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 17 23:43:45.117176 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 17 23:43:45.117194 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 17 23:43:45.117212 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 17 23:43:45.117230 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 17 23:43:45.117247 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 17 23:43:45.117265 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 17 23:43:45.117288 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 17 23:43:45.117305 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:43:45.117323 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:43:45.117341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 17 23:43:45.117358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 17 23:43:45.117375 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 17 23:43:45.117393 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 17 23:43:45.117411 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 17 23:43:45.117429 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Apr 17 23:43:45.117452 kernel: Zone ranges:
Apr 17 23:43:45.117471 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:43:45.117489 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Apr 17 23:43:45.117506 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Apr 17 23:43:45.117524 kernel: Movable zone start for each node
Apr 17 23:43:45.117542 kernel: Early memory node ranges
Apr 17 23:43:45.117559 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 17 23:43:45.117577 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 17 23:43:45.117594 kernel:   node   0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 17 23:43:45.117611 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 17 23:43:45.117633 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 17 23:43:45.117649 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 17 23:43:45.117677 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:43:45.117694 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 17 23:43:45.117709 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 17 23:43:45.117727 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 17 23:43:45.117744 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 17 23:43:45.117762 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:43:45.117779 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:43:45.117801 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:43:45.117818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:43:45.117835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:43:45.117851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:43:45.117867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:43:45.117885 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:43:45.117902 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:43:45.117919 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 17 23:43:45.118303 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:43:45.118326 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:43:45.118343 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:43:45.118360 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:43:45.118377 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:43:45.118394 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:43:45.118411 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:43:45.118428 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:43:45.118447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:43:45.118469 kernel: random: crng init done
Apr 17 23:43:45.118486 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 17 23:43:45.118504 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:43:45.118521 kernel: Fallback order for Node 0: 0
Apr 17 23:43:45.118540 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1932280
Apr 17 23:43:45.118567 kernel: Policy zone: Normal
Apr 17 23:43:45.118587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:43:45.118603 kernel: software IO TLB: area num 2.
Apr 17 23:43:45.118621 kernel: Memory: 7513256K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347068K reserved, 0K cma-reserved)
Apr 17 23:43:45.118642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:43:45.118659 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:43:45.118686 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:43:45.118704 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:43:45.118721 kernel: Dynamic Preempt: voluntary
Apr 17 23:43:45.118737 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:43:45.118755 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:43:45.118772 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:43:45.118808 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:43:45.118826 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:43:45.118848 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:43:45.118867 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:43:45.118889 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:43:45.118907 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:43:45.118949 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:43:45.118968 kernel: Console: colour dummy device 80x25
Apr 17 23:43:45.118986 kernel: printk: console [ttyS0] enabled
Apr 17 23:43:45.119009 kernel: ACPI: Core revision 20230628
Apr 17 23:43:45.119027 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:43:45.119046 kernel: x2apic enabled
Apr 17 23:43:45.119064 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:43:45.119082 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 17 23:43:45.119110 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 17 23:43:45.119128 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 17 23:43:45.119146 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 17 23:43:45.119164 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 17 23:43:45.119187 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:43:45.119206 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 17 23:43:45.119225 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 17 23:43:45.119243 kernel: Spectre V2 : Mitigation: IBRS
Apr 17 23:43:45.119261 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:43:45.119283 kernel: RETBleed: Mitigation: IBRS
Apr 17 23:43:45.119303 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 17 23:43:45.119320 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Apr 17 23:43:45.119342 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 17 23:43:45.119361 kernel: MDS: Mitigation: Clear CPU buffers
Apr 17 23:43:45.119379 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:43:45.119396 kernel: active return thunk: its_return_thunk
Apr 17 23:43:45.119414 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:43:45.119432 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:43:45.119450 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:43:45.119468 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:43:45.119486 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Apr 17 23:43:45.120974 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 17 23:43:45.121000 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:43:45.121019 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:43:45.121038 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:43:45.121057 kernel: landlock: Up and running.
Apr 17 23:43:45.121075 kernel: SELinux:  Initializing.
Apr 17 23:43:45.121094 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:43:45.121113 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:43:45.121132 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 17 23:43:45.121157 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:43:45.121176 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:43:45.121194 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:43:45.121213 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 17 23:43:45.121231 kernel: signal: max sigframe size: 1776
Apr 17 23:43:45.121250 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:43:45.121270 kernel: rcu: 	Max phase no-delay instances is 400.
Apr 17 23:43:45.121288 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:43:45.121307 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:43:45.121330 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:43:45.121348 kernel: .... node  #0, CPUs:      #1
Apr 17 23:43:45.121368 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:43:45.121388 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:43:45.121406 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:43:45.121425 kernel: smpboot: Max logical packages: 1
Apr 17 23:43:45.121443 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 17 23:43:45.121462 kernel: devtmpfs: initialized
Apr 17 23:43:45.121481 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:43:45.121504 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 17 23:43:45.121523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:43:45.121542 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:43:45.121561 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:43:45.121580 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:43:45.121599 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:43:45.121618 kernel: audit: type=2000 audit(1776469423.634:1): state=initialized audit_enabled=0 res=1
Apr 17 23:43:45.121636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:43:45.121656 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:43:45.121690 kernel: cpuidle: using governor menu
Apr 17 23:43:45.121707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:43:45.121726 kernel: dca service started, version 1.12.1
Apr 17 23:43:45.121745 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:43:45.121763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:43:45.121781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:43:45.121799 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:43:45.121818 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:43:45.121842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:43:45.121861 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:43:45.121879 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:43:45.121898 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:43:45.121916 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:43:45.121978 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:43:45.121993 kernel: ACPI: Interpreter enabled
Apr 17 23:43:45.122009 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:43:45.122024 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:43:45.122041 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:43:45.122063 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 17 23:43:45.122078 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 17 23:43:45.122094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:43:45.124007 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:43:45.124251 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:43:45.124455 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:43:45.124480 kernel: PCI host bridge to bus 0000:00
Apr 17 23:43:45.124692 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Apr 17 23:43:45.124880 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Apr 17 23:43:45.125082 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:43:45.125265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 17 23:43:45.125448 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:43:45.125656 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:43:45.127065 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 17 23:43:45.127302 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 17 23:43:45.127499 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:43:45.127713 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 17 23:43:45.127911 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Apr 17 23:43:45.129180 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 17 23:43:45.129389 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:43:45.129590 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Apr 17 23:43:45.129791 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 17 23:43:45.131040 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:43:45.131247 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Apr 17 23:43:45.131439 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 17 23:43:45.131466 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:43:45.131485 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:43:45.131510 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:43:45.131529 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:43:45.131549 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:43:45.131568 kernel: iommu: Default domain type: Translated
Apr 17 23:43:45.131588 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:43:45.131606 kernel: efivars: Registered efivars operations
Apr 17 23:43:45.131625 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:43:45.131643 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:43:45.131685 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 17 23:43:45.131709 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 17 23:43:45.131727 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 17 23:43:45.131745 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 17 23:43:45.131764 kernel: vgaarb: loaded
Apr 17 23:43:45.131784 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:43:45.131803 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:43:45.131823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:43:45.131842 kernel: pnp: PnP ACPI init
Apr 17 23:43:45.131862 kernel: pnp: PnP ACPI: found 7 devices
Apr 17 23:43:45.131886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:43:45.131906 kernel: NET: Registered PF_INET protocol family
Apr 17 23:43:45.132528 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:43:45.132557 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 17 23:43:45.132577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:43:45.132598 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:43:45.132618 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 17 23:43:45.132638 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 17 23:43:45.132658 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:43:45.132697 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:43:45.132716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:43:45.132735 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:43:45.132972 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Apr 17 23:43:45.133168 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Apr 17 23:43:45.133340 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:43:45.133519 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 17 23:43:45.133726 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:43:45.133760 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:43:45.133780 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:43:45.133801 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 17 23:43:45.133821 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:43:45.133840 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 17 23:43:45.133860 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:43:45.133880 kernel: Initialise system trusted keyrings
Apr 17 23:43:45.133900 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 17 23:43:45.133949 kernel: Key type asymmetric registered
Apr 17 23:43:45.133966 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:43:45.133981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:43:45.133997 kernel: io scheduler mq-deadline registered
Apr 17 23:43:45.134015 kernel: io scheduler kyber registered
Apr 17 23:43:45.134034 kernel: io scheduler bfq registered
Apr 17 23:43:45.134052 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:43:45.134072 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 17 23:43:45.134280 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 17 23:43:45.134314 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 17 23:43:45.134506 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 17 23:43:45.134531 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 17 23:43:45.134732 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 17 23:43:45.134757 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:43:45.134776 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:43:45.134794 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 17 23:43:45.134811 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 17 23:43:45.134835 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 17 23:43:45.135083 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 17 23:43:45.135112 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:43:45.135131 kernel: i8042: Warning: Keylock active
Apr 17 23:43:45.135148 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:43:45.135166 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:43:45.135370 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:43:45.135559 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:43:45.135757 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:43:44 UTC (1776469424)
Apr 17 23:43:45.137978 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:43:45.138016 kernel: intel_pstate: CPU model not supported
Apr 17 23:43:45.138037 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:43:45.138056 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:43:45.138074 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:43:45.138093 kernel: Segment Routing with IPv6
Apr 17 23:43:45.138112 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:43:45.138138 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:43:45.138158 kernel: Key type dns_resolver registered
Apr 17 23:43:45.138177 kernel: IPI shorthand broadcast: enabled
Apr 17 23:43:45.138196 kernel: sched_clock: Marking stable (888004740, 154276338)->(1089503230, -47222152)
Apr 17 23:43:45.138215 kernel: registered taskstats version 1
Apr 17 23:43:45.138235 kernel: Loading compiled-in X.509 certificates
Apr 17 23:43:45.138254 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:43:45.138272 kernel: Key type .fscrypt registered
Apr 17 23:43:45.138291 kernel: Key type fscrypt-provisioning registered
Apr 17 23:43:45.138314 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:43:45.138335 kernel: ima: No architecture policies found
Apr 17 23:43:45.138354 kernel: clk: Disabling unused clocks
Apr 17 23:43:45.138373 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:43:45.138393 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:43:45.138412 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:43:45.138431 kernel: Run /init as init process
Apr 17 23:43:45.138450 kernel:   with arguments:
Apr 17 23:43:45.138469 kernel:     /init
Apr 17 23:43:45.138488 kernel:   with environment:
Apr 17 23:43:45.138512 kernel:     HOME=/
Apr 17 23:43:45.138531 kernel:     TERM=linux
Apr 17 23:43:45.138551 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 17 23:43:45.138574 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:43:45.138598 systemd[1]: Detected virtualization google.
Apr 17 23:43:45.138618 systemd[1]: Detected architecture x86-64.
Apr 17 23:43:45.138638 systemd[1]: Running in initrd.
Apr 17 23:43:45.138672 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:43:45.138692 systemd[1]: Hostname set to .
Apr 17 23:43:45.138713 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:43:45.138732 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:43:45.138753 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:43:45.138773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:43:45.138795 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:43:45.138816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:43:45.138841 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:43:45.138861 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:43:45.138885 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:43:45.138904 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:43:45.138948 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:43:45.138971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:43:45.138998 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:43:45.139017 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:43:45.139059 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:43:45.139085 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:43:45.139111 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:43:45.139132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:43:45.139155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:43:45.139181 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:43:45.139203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:43:45.139225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:43:45.139245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:43:45.139267 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:43:45.139289 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:43:45.139312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:43:45.139333 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:43:45.139358 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:43:45.139380 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:43:45.139402 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:43:45.139424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:43:45.139482 systemd-journald[184]: Collecting audit messages is disabled.
Apr 17 23:43:45.139534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:43:45.139555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:43:45.139578 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:43:45.139601 systemd-journald[184]: Journal started
Apr 17 23:43:45.139647 systemd-journald[184]: Runtime Journal (/run/log/journal/2287602114b1496fb26a66b49a6a9078) is 8.0M, max 148.7M, 140.7M free.
Apr 17 23:43:45.152951 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:43:45.153806 systemd-modules-load[185]: Inserted module 'overlay'
Apr 17 23:43:45.159081 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:43:45.165669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:43:45.168834 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:43:45.193955 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:43:45.195247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:43:45.198547 kernel: Bridge firewalling registered
Apr 17 23:43:45.197419 systemd-modules-load[185]: Inserted module 'br_netfilter'
Apr 17 23:43:45.198970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:43:45.201207 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:43:45.201783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:43:45.214907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:43:45.242171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:43:45.252512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:43:45.257451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:43:45.266475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:43:45.277223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:43:45.281125 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:43:45.321463 dracut-cmdline[215]: dracut-dracut-053
Apr 17 23:43:45.326186 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:43:45.347490 systemd-resolved[217]: Positive Trust Anchors:
Apr 17 23:43:45.348106 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:43:45.348333 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:43:45.355633 systemd-resolved[217]: Defaulting to hostname 'linux'.
Apr 17 23:43:45.360853 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:43:45.370193 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:43:45.434977 kernel: SCSI subsystem initialized
Apr 17 23:43:45.446985 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:43:45.458992 kernel: iscsi: registered transport (tcp) Apr 17 23:43:45.484132 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:43:45.484220 kernel: QLogic iSCSI HBA Driver Apr 17 23:43:45.537855 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:43:45.544155 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:43:45.582962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:43:45.583065 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:43:45.583094 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:43:45.629967 kernel: raid6: avx2x4 gen() 17963 MB/s Apr 17 23:43:45.646968 kernel: raid6: avx2x2 gen() 17648 MB/s Apr 17 23:43:45.664953 kernel: raid6: avx2x1 gen() 13502 MB/s Apr 17 23:43:45.665020 kernel: raid6: using algorithm avx2x4 gen() 17963 MB/s Apr 17 23:43:45.682451 kernel: raid6: .... xor() 6763 MB/s, rmw enabled Apr 17 23:43:45.682517 kernel: raid6: using avx2x2 recovery algorithm Apr 17 23:43:45.705978 kernel: xor: automatically using best checksumming function avx Apr 17 23:43:45.879967 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:43:45.894473 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:43:45.901204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:43:45.928437 systemd-udevd[400]: Using default interface naming scheme 'v255'. Apr 17 23:43:45.935766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:43:45.944581 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:43:45.978853 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 17 23:43:46.018293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 17 23:43:46.033187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:43:46.128882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:43:46.135153 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:43:46.171790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:43:46.175055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:43:46.212114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:43:46.224215 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:43:46.246965 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:43:46.275191 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:43:46.292183 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:43:46.303970 kernel: AES CTR mode by8 optimization enabled Apr 17 23:43:46.304064 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:43:46.315245 kernel: blk-mq: reduced tag depth to 10240 Apr 17 23:43:46.315897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:43:46.316162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:46.379128 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 17 23:43:46.391427 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:43:46.417062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:43:46.417615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:43:46.449081 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 17 23:43:46.449442 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 17 23:43:46.449682 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 17 23:43:46.453969 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 17 23:43:46.454467 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:43:46.474150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:46.529163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:43:46.529214 kernel: GPT:17805311 != 33554431 Apr 17 23:43:46.529240 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:43:46.529263 kernel: GPT:17805311 != 33554431 Apr 17 23:43:46.529284 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:43:46.529308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.529331 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 17 23:43:46.501365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:46.550858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:43:46.586992 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (461) Apr 17 23:43:46.604953 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (469) Apr 17 23:43:46.611366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:46.631663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Apr 17 23:43:46.646225 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 17 23:43:46.670770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. 
Apr 17 23:43:46.688118 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 17 23:43:46.718619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 17 23:43:46.741138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:43:46.757490 disk-uuid[542]: Primary Header is updated. Apr 17 23:43:46.757490 disk-uuid[542]: Secondary Entries is updated. Apr 17 23:43:46.757490 disk-uuid[542]: Secondary Header is updated. Apr 17 23:43:46.779163 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.775167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:43:46.823607 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.823642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.862607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:47.820054 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:47.820138 disk-uuid[543]: The operation has completed successfully. Apr 17 23:43:47.898202 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:43:47.898353 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:43:47.929195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:43:47.949433 sh[569]: Success Apr 17 23:43:47.972954 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:43:48.068029 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:43:48.075551 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:43:48.100536 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:43:48.155499 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:43:48.155594 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:48.155620 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:43:48.165089 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:43:48.177727 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:43:48.209966 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:43:48.217547 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:43:48.218597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:43:48.224172 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:43:48.238183 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:43:48.307535 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:48.307626 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:48.307653 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:43:48.325957 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:43:48.326035 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:43:48.341395 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:43:48.359453 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:48.359104 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:43:48.375317 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:43:48.482382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:43:48.489157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:43:48.592660 systemd-networkd[752]: lo: Link UP Apr 17 23:43:48.592673 systemd-networkd[752]: lo: Gained carrier Apr 17 23:43:48.595979 systemd-networkd[752]: Enumeration completed Apr 17 23:43:48.596150 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:43:48.603208 ignition[660]: Ignition 2.19.0 Apr 17 23:43:48.596742 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:43:48.603218 ignition[660]: Stage: fetch-offline Apr 17 23:43:48.596749 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:43:48.603264 ignition[660]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.599877 systemd-networkd[752]: eth0: Link UP Apr 17 23:43:48.603276 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.599884 systemd-networkd[752]: eth0: Gained carrier Apr 17 23:43:48.603430 ignition[660]: parsed url from cmdline: "" Apr 17 23:43:48.599900 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:43:48.603437 ignition[660]: no config URL provided Apr 17 23:43:48.612034 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:43:48.603445 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:43:48.612053 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.99/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 17 23:43:48.603458 ignition[660]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:43:48.614552 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:43:48.603467 ignition[660]: failed to fetch config: resource requires networking Apr 17 23:43:48.632865 systemd[1]: Reached target network.target - Network. Apr 17 23:43:48.603679 ignition[660]: Ignition finished successfully Apr 17 23:43:48.655204 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:43:48.703136 ignition[760]: Ignition 2.19.0 Apr 17 23:43:48.716165 unknown[760]: fetched base config from "system" Apr 17 23:43:48.703146 ignition[760]: Stage: fetch Apr 17 23:43:48.716177 unknown[760]: fetched base config from "system" Apr 17 23:43:48.703337 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.716186 unknown[760]: fetched user config from "gcp" Apr 17 23:43:48.703350 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.719126 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:43:48.703495 ignition[760]: parsed url from cmdline: "" Apr 17 23:43:48.742195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:43:48.703502 ignition[760]: no config URL provided Apr 17 23:43:48.801488 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 17 23:43:48.703512 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:43:48.828147 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:43:48.703527 ignition[760]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:43:48.882632 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:43:48.703558 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 17 23:43:48.890751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:43:48.708466 ignition[760]: GET result: OK Apr 17 23:43:48.917237 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:43:48.708561 ignition[760]: parsing config with SHA512: 4edc912f907114c445cf4874f97e0103f9aaad718a3dbc5258b10e4e8c327b53fd2049df279fc33a45f9d35dc5735972ca70c3e0654e3237e1cdfe29d9dcb453 Apr 17 23:43:48.923320 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:43:48.716753 ignition[760]: fetch: fetch complete Apr 17 23:43:48.940355 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:43:48.716760 ignition[760]: fetch: fetch passed Apr 17 23:43:48.970229 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:43:48.716823 ignition[760]: Ignition finished successfully Apr 17 23:43:48.999240 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 17 23:43:48.788987 ignition[766]: Ignition 2.19.0 Apr 17 23:43:48.788998 ignition[766]: Stage: kargs Apr 17 23:43:48.789273 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.789287 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.790369 ignition[766]: kargs: kargs passed Apr 17 23:43:48.790427 ignition[766]: Ignition finished successfully Apr 17 23:43:48.856483 ignition[771]: Ignition 2.19.0 Apr 17 23:43:48.856496 ignition[771]: Stage: disks Apr 17 23:43:48.856743 ignition[771]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.856761 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.858467 ignition[771]: disks: disks passed Apr 17 23:43:48.858533 ignition[771]: Ignition finished successfully Apr 17 23:43:49.055845 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 17 23:43:49.222094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:43:49.229121 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:43:49.383990 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:43:49.384886 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:43:49.393881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:43:49.418115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:43:49.447108 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:43:49.447999 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 17 23:43:49.529145 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788) Apr 17 23:43:49.529197 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:49.529235 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:49.529259 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:43:49.529282 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:43:49.529305 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:43:49.448095 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:43:49.448140 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:43:49.521912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:43:49.557817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:43:49.580187 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:43:49.723245 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:43:49.735670 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:43:49.746098 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:43:49.756088 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:43:49.912141 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:43:49.919164 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:43:49.954982 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:49.961195 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:43:49.971615 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:43:50.021436 ignition[900]: INFO : Ignition 2.19.0 Apr 17 23:43:50.021436 ignition[900]: INFO : Stage: mount Apr 17 23:43:50.021436 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:50.021436 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:50.060283 ignition[900]: INFO : mount: mount passed Apr 17 23:43:50.060283 ignition[900]: INFO : Ignition finished successfully Apr 17 23:43:50.024887 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:43:50.029729 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:43:50.049086 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:43:50.221273 systemd-networkd[752]: eth0: Gained IPv6LL Apr 17 23:43:50.391202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:43:50.440961 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (912) Apr 17 23:43:50.458490 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:50.458602 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:50.458629 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:43:50.481437 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:43:50.481547 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:43:50.485183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:43:50.525741 ignition[929]: INFO : Ignition 2.19.0 Apr 17 23:43:50.525741 ignition[929]: INFO : Stage: files Apr 17 23:43:50.542102 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:50.542102 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:50.542102 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:43:50.542102 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:43:50.540027 unknown[929]: wrote ssh authorized keys file for user: core Apr 17 23:43:50.645156 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:43:50.645156 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:43:50.889054 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:43:51.188662 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:43:51.188662 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 
23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:43:51.613836 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 17 23:43:54.124331 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:43:54.124331 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:43:54.163089 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:43:54.163089 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:43:54.163089 ignition[929]: INFO : files: files passed Apr 17 23:43:54.163089 ignition[929]: INFO : Ignition finished successfully Apr 17 23:43:54.128826 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:43:54.149183 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:43:54.196187 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 17 23:43:54.207667 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:43:54.375141 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:43:54.375141 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:43:54.207796 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:43:54.413271 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:43:54.271114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:43:54.277467 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:43:54.317175 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:43:54.401687 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:43:54.401812 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:43:54.424966 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:43:54.448124 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:43:54.469380 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:43:54.475177 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:43:54.538776 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:43:54.560170 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:43:54.603546 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:43:54.612500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 17 23:43:54.652311 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:43:54.652734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:43:54.652950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:43:54.686530 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:43:54.714277 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:43:54.714704 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:43:54.730548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:43:54.768305 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:43:54.768724 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:43:54.786528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:43:54.803538 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:43:54.841356 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:43:54.858280 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:43:54.858638 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:43:54.859112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:43:54.899423 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:43:54.899814 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:43:54.917438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:43:54.917598 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:43:54.937434 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:43:54.937630 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 17 23:43:54.978495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:43:54.978726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:43:55.007504 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:43:55.007648 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:43:55.024230 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:43:55.065319 ignition[981]: INFO : Ignition 2.19.0 Apr 17 23:43:55.065319 ignition[981]: INFO : Stage: umount Apr 17 23:43:55.065319 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:55.065319 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:55.065319 ignition[981]: INFO : umount: umount passed Apr 17 23:43:55.065319 ignition[981]: INFO : Ignition finished successfully Apr 17 23:43:55.073105 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:43:55.073405 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:43:55.097260 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:43:55.106088 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:43:55.106478 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:43:55.156448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:43:55.156639 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:43:55.192024 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:43:55.193072 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:43:55.193194 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:43:55.207815 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Apr 17 23:43:55.207959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:43:55.229377 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:43:55.229504 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:43:55.250537 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:43:55.250600 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:43:55.268204 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:43:55.268322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:43:55.288254 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:43:55.288348 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:43:55.307215 systemd[1]: Stopped target network.target - Network. Apr 17 23:43:55.323161 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:43:55.323290 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:43:55.341378 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:43:55.359136 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:43:55.363107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:43:55.378189 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:43:55.378297 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:43:55.404230 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:43:55.404307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:43:55.425229 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:43:55.425325 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:43:55.443184 systemd[1]: ignition-setup.service: Deactivated successfully. 
Apr 17 23:43:55.443330 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:43:55.461216 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:43:55.461320 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:43:55.480184 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:43:55.480307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:43:55.498477 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:43:55.504005 systemd-networkd[752]: eth0: DHCPv6 lease lost Apr 17 23:43:55.516289 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:43:55.543642 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:43:55.543783 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:43:55.564603 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:43:55.564851 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:43:55.574835 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:43:55.574889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:43:55.596168 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:43:55.621255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:43:55.621345 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:43:55.655342 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:43:55.655422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:43:55.664412 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:43:55.664482 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Apr 17 23:43:55.681394 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:43:55.681471 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:43:55.698538 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:43:55.727715 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:43:56.112110 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Apr 17 23:43:55.727903 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:43:55.753480 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:43:55.753589 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:43:55.774202 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:43:55.774277 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:43:55.794224 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:43:55.794336 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:43:55.824135 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:43:55.824255 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:43:55.851142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:43:55.851277 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:55.886198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:43:55.909267 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:43:55.909347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:43:55.926512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 17 23:43:55.926591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:55.957841 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:43:55.958002 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:43:55.978731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:43:55.978853 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:43:56.010715 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:43:56.024213 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:43:56.068302 systemd[1]: Switching root. Apr 17 23:43:56.341092 systemd-journald[184]: Journal stopped Apr 17 23:43:45.116318 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 17 23:43:45.116347 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 17 23:43:45.116363 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250807) Apr 17 23:43:45.116380 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 17 23:43:45.116406 kernel: ACPI: SRAT
0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 17 23:43:45.116422 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 17 23:43:45.117091 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 17 23:43:45.117123 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 17 23:43:45.117142 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 17 23:43:45.117160 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 17 23:43:45.117176 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 17 23:43:45.117194 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 17 23:43:45.117212 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 17 23:43:45.117230 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 17 23:43:45.117247 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 17 23:43:45.117265 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 17 23:43:45.117288 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 17 23:43:45.117305 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 17 23:43:45.117323 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 17 23:43:45.117341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 17 23:43:45.117358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Apr 17 23:43:45.117375 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 17 23:43:45.117393 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 17 23:43:45.117411 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 17 23:43:45.117429 kernel: NODE_DATA(0) allocated [mem 
0x21fffa000-0x21fffffff] Apr 17 23:43:45.117452 kernel: Zone ranges: Apr 17 23:43:45.117471 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:43:45.117489 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:43:45.117506 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 17 23:43:45.117524 kernel: Movable zone start for each node Apr 17 23:43:45.117542 kernel: Early memory node ranges Apr 17 23:43:45.117559 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 17 23:43:45.117577 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 17 23:43:45.117594 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 17 23:43:45.117611 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 17 23:43:45.117633 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 17 23:43:45.117649 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 17 23:43:45.117677 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:43:45.117694 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 17 23:43:45.117709 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 17 23:43:45.117727 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 17 23:43:45.117744 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 17 23:43:45.117762 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 17 23:43:45.117779 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 17 23:43:45.117801 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:43:45.117818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 17 23:43:45.117835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:43:45.117851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 17 23:43:45.117867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 17 23:43:45.117885 kernel: 
ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:43:45.117902 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:43:45.117919 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 17 23:43:45.118303 kernel: Booting paravirtualized kernel on KVM Apr 17 23:43:45.118326 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:43:45.118343 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:43:45.118360 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:43:45.118377 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:43:45.118394 kernel: pcpu-alloc: [0] 0 1 Apr 17 23:43:45.118411 kernel: kvm-guest: PV spinlocks enabled Apr 17 23:43:45.118428 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 23:43:45.118447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:43:45.118469 kernel: random: crng init done Apr 17 23:43:45.118486 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 17 23:43:45.118504 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:43:45.118521 kernel: Fallback order for Node 0: 0 Apr 17 23:43:45.118540 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 17 23:43:45.118567 kernel: Policy zone: Normal Apr 17 23:43:45.118587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:43:45.118603 kernel: software IO TLB: area num 2. 
Apr 17 23:43:45.118621 kernel: Memory: 7513256K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347068K reserved, 0K cma-reserved) Apr 17 23:43:45.118642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:43:45.118659 kernel: Kernel/User page tables isolation: enabled Apr 17 23:43:45.118686 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:43:45.118704 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:43:45.118721 kernel: Dynamic Preempt: voluntary Apr 17 23:43:45.118737 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:43:45.118755 kernel: rcu: RCU event tracing is enabled. Apr 17 23:43:45.118772 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 17 23:43:45.118808 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:43:45.118826 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:43:45.118848 kernel: Tracing variant of Tasks RCU enabled. Apr 17 23:43:45.118867 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:43:45.118889 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:43:45.118907 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 17 23:43:45.118949 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 17 23:43:45.118968 kernel: Console: colour dummy device 80x25 Apr 17 23:43:45.118986 kernel: printk: console [ttyS0] enabled Apr 17 23:43:45.119009 kernel: ACPI: Core revision 20230628 Apr 17 23:43:45.119027 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:43:45.119046 kernel: x2apic enabled Apr 17 23:43:45.119064 kernel: APIC: Switched APIC routing to: physical x2apic Apr 17 23:43:45.119082 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 17 23:43:45.119110 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 17 23:43:45.119128 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 17 23:43:45.119146 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 17 23:43:45.119164 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 17 23:43:45.119187 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:43:45.119206 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 17 23:43:45.119225 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 17 23:43:45.119243 kernel: Spectre V2 : Mitigation: IBRS Apr 17 23:43:45.119261 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:43:45.119283 kernel: RETBleed: Mitigation: IBRS Apr 17 23:43:45.119303 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 17 23:43:45.119320 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 17 23:43:45.119342 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 17 23:43:45.119361 kernel: MDS: Mitigation: Clear CPU buffers Apr 17 23:43:45.119379 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:43:45.119396 kernel: active return thunk: its_return_thunk Apr 17 23:43:45.119414 
kernel: ITS: Mitigation: Aligned branch/return thunks Apr 17 23:43:45.119432 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:43:45.119450 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:43:45.119468 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:43:45.119486 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:43:45.120974 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 17 23:43:45.121000 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:43:45.121019 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:43:45.121038 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:43:45.121057 kernel: landlock: Up and running. Apr 17 23:43:45.121075 kernel: SELinux: Initializing. Apr 17 23:43:45.121094 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:43:45.121113 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:43:45.121132 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 17 23:43:45.121157 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:43:45.121176 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:43:45.121194 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:43:45.121213 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 17 23:43:45.121231 kernel: signal: max sigframe size: 1776 Apr 17 23:43:45.121250 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:43:45.121270 kernel: rcu: Max phase no-delay instances is 400. 
Apr 17 23:43:45.121288 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 23:43:45.121307 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:43:45.121330 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:43:45.121348 kernel: .... node #0, CPUs: #1 Apr 17 23:43:45.121368 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 17 23:43:45.121388 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 17 23:43:45.121406 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:43:45.121425 kernel: smpboot: Max logical packages: 1 Apr 17 23:43:45.121443 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 17 23:43:45.121462 kernel: devtmpfs: initialized Apr 17 23:43:45.121481 kernel: x86/mm: Memory block size: 128MB Apr 17 23:43:45.121504 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 17 23:43:45.121523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:43:45.121542 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:43:45.121561 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:43:45.121580 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:43:45.121599 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:43:45.121618 kernel: audit: type=2000 audit(1776469423.634:1): state=initialized audit_enabled=0 res=1 Apr 17 23:43:45.121636 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:43:45.121656 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:43:45.121690 kernel: cpuidle: using governor menu Apr 17 23:43:45.121707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 
23:43:45.121726 kernel: dca service started, version 1.12.1 Apr 17 23:43:45.121745 kernel: PCI: Using configuration type 1 for base access Apr 17 23:43:45.121763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 17 23:43:45.121781 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:43:45.121799 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:43:45.121818 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:43:45.121842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:43:45.121861 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:43:45.121879 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:43:45.121898 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:43:45.121916 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 17 23:43:45.121978 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:43:45.121993 kernel: ACPI: Interpreter enabled Apr 17 23:43:45.122009 kernel: ACPI: PM: (supports S0 S3 S5) Apr 17 23:43:45.122024 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:43:45.122041 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:43:45.122063 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 17 23:43:45.122078 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 17 23:43:45.122094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 17 23:43:45.124007 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 17 23:43:45.124251 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 17 23:43:45.124455 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 17 23:43:45.124480 kernel: PCI host bridge to bus 0000:00 Apr 17 
23:43:45.124692 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 17 23:43:45.124880 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 17 23:43:45.125082 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 17 23:43:45.125265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Apr 17 23:43:45.125448 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 17 23:43:45.125656 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 17 23:43:45.127065 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Apr 17 23:43:45.127302 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 17 23:43:45.127499 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 17 23:43:45.127713 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Apr 17 23:43:45.127911 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 17 23:43:45.129180 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Apr 17 23:43:45.129389 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 17 23:43:45.129590 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Apr 17 23:43:45.129791 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Apr 17 23:43:45.131040 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Apr 17 23:43:45.131247 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 17 23:43:45.131439 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Apr 17 23:43:45.131466 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 17 23:43:45.131485 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 17 23:43:45.131510 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 17 23:43:45.131529 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 17 23:43:45.131549 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 17 23:43:45.131568 
kernel: iommu: Default domain type: Translated Apr 17 23:43:45.131588 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 23:43:45.131606 kernel: efivars: Registered efivars operations Apr 17 23:43:45.131625 kernel: PCI: Using ACPI for IRQ routing Apr 17 23:43:45.131643 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 17 23:43:45.131685 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Apr 17 23:43:45.131709 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Apr 17 23:43:45.131727 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Apr 17 23:43:45.131745 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Apr 17 23:43:45.131764 kernel: vgaarb: loaded Apr 17 23:43:45.131784 kernel: clocksource: Switched to clocksource kvm-clock Apr 17 23:43:45.131803 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 23:43:45.131823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 23:43:45.131842 kernel: pnp: PnP ACPI init Apr 17 23:43:45.131862 kernel: pnp: PnP ACPI: found 7 devices Apr 17 23:43:45.131886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 17 23:43:45.131906 kernel: NET: Registered PF_INET protocol family Apr 17 23:43:45.132528 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 17 23:43:45.132557 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 17 23:43:45.132577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 17 23:43:45.132598 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 17 23:43:45.132618 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 17 23:43:45.132638 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 17 23:43:45.132658 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 17 23:43:45.132697 kernel: UDP-Lite 
hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 17 23:43:45.132716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:43:45.132735 kernel: NET: Registered PF_XDP protocol family Apr 17 23:43:45.132972 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 17 23:43:45.133168 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 17 23:43:45.133340 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 17 23:43:45.133519 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Apr 17 23:43:45.133726 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 17 23:43:45.133760 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:43:45.133780 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 17 23:43:45.133801 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Apr 17 23:43:45.133821 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 17 23:43:45.133840 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 17 23:43:45.133860 kernel: clocksource: Switched to clocksource tsc Apr 17 23:43:45.133880 kernel: Initialise system trusted keyrings Apr 17 23:43:45.133900 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 17 23:43:45.133949 kernel: Key type asymmetric registered Apr 17 23:43:45.133966 kernel: Asymmetric key parser 'x509' registered Apr 17 23:43:45.133981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 17 23:43:45.133997 kernel: io scheduler mq-deadline registered Apr 17 23:43:45.134015 kernel: io scheduler kyber registered Apr 17 23:43:45.134034 kernel: io scheduler bfq registered Apr 17 23:43:45.134052 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 23:43:45.134072 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 17 23:43:45.134280 kernel: virtio-pci 0000:00:03.0: 
virtio_pci: leaving for legacy driver Apr 17 23:43:45.134314 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Apr 17 23:43:45.134506 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 17 23:43:45.134531 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 17 23:43:45.134732 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 17 23:43:45.134757 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:43:45.134776 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 23:43:45.134794 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 17 23:43:45.134811 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 17 23:43:45.134835 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 17 23:43:45.135083 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 17 23:43:45.135112 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 17 23:43:45.135131 kernel: i8042: Warning: Keylock active Apr 17 23:43:45.135148 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 17 23:43:45.135166 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 17 23:43:45.135370 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 17 23:43:45.135559 kernel: rtc_cmos 00:00: registered as rtc0 Apr 17 23:43:45.135757 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:43:44 UTC (1776469424) Apr 17 23:43:45.137978 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 17 23:43:45.138016 kernel: intel_pstate: CPU model not supported Apr 17 23:43:45.138037 kernel: pstore: Using crash dump compression: deflate Apr 17 23:43:45.138056 kernel: pstore: Registered efi_pstore as persistent store backend Apr 17 23:43:45.138074 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:43:45.138093 kernel: Segment Routing with IPv6 Apr 17 23:43:45.138112 kernel: In-situ OAM (IOAM) with IPv6 
Apr 17 23:43:45.138138 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:43:45.138158 kernel: Key type dns_resolver registered Apr 17 23:43:45.138177 kernel: IPI shorthand broadcast: enabled Apr 17 23:43:45.138196 kernel: sched_clock: Marking stable (888004740, 154276338)->(1089503230, -47222152) Apr 17 23:43:45.138215 kernel: registered taskstats version 1 Apr 17 23:43:45.138235 kernel: Loading compiled-in X.509 certificates Apr 17 23:43:45.138254 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f' Apr 17 23:43:45.138272 kernel: Key type .fscrypt registered Apr 17 23:43:45.138291 kernel: Key type fscrypt-provisioning registered Apr 17 23:43:45.138314 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:43:45.138335 kernel: ima: No architecture policies found Apr 17 23:43:45.138354 kernel: clk: Disabling unused clocks Apr 17 23:43:45.138373 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:43:45.138393 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:43:45.138412 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:43:45.138431 kernel: Run /init as init process Apr 17 23:43:45.138450 kernel: with arguments: Apr 17 23:43:45.138469 kernel: /init Apr 17 23:43:45.138488 kernel: with environment: Apr 17 23:43:45.138512 kernel: HOME=/ Apr 17 23:43:45.138531 kernel: TERM=linux Apr 17 23:43:45.138551 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 17 23:43:45.138574 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:43:45.138598 systemd[1]: Detected virtualization google. 
Apr 17 23:43:45.138618 systemd[1]: Detected architecture x86-64. Apr 17 23:43:45.138638 systemd[1]: Running in initrd. Apr 17 23:43:45.138672 systemd[1]: No hostname configured, using default hostname. Apr 17 23:43:45.138692 systemd[1]: Hostname set to . Apr 17 23:43:45.138713 systemd[1]: Initializing machine ID from random generator. Apr 17 23:43:45.138732 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:43:45.138753 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:43:45.138773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:43:45.138795 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:43:45.138816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:43:45.138841 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:43:45.138861 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:43:45.138885 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:43:45.138904 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:43:45.138948 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:43:45.138971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:43:45.138998 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:43:45.139017 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:43:45.139059 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:43:45.139085 systemd[1]: Reached target timers.target - Timer Units. 
Apr 17 23:43:45.139111 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:43:45.139132 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:43:45.139155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:43:45.139181 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:43:45.139203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:43:45.139225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:43:45.139245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:43:45.139267 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:43:45.139289 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:43:45.139312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:43:45.139333 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:43:45.139358 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:43:45.139380 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:43:45.139402 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:43:45.139424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:45.139482 systemd-journald[184]: Collecting audit messages is disabled. Apr 17 23:43:45.139534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:43:45.139555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:43:45.139578 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:43:45.139601 systemd-journald[184]: Journal started Apr 17 23:43:45.139647 systemd-journald[184]: Runtime Journal (/run/log/journal/2287602114b1496fb26a66b49a6a9078) is 8.0M, max 148.7M, 140.7M free. 
Apr 17 23:43:45.152951 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:43:45.153806 systemd-modules-load[185]: Inserted module 'overlay' Apr 17 23:43:45.159081 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:43:45.165669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:45.168834 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:43:45.193955 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:43:45.195247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:43:45.198547 kernel: Bridge firewalling registered Apr 17 23:43:45.197419 systemd-modules-load[185]: Inserted module 'br_netfilter' Apr 17 23:43:45.198970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:43:45.201207 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:43:45.201783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:43:45.214907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:43:45.242171 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:45.252512 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:43:45.257451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:43:45.266475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:43:45.277223 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 17 23:43:45.281125 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:43:45.321463 dracut-cmdline[215]: dracut-dracut-053 Apr 17 23:43:45.326186 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:43:45.347490 systemd-resolved[217]: Positive Trust Anchors: Apr 17 23:43:45.348106 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:43:45.348333 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:43:45.355633 systemd-resolved[217]: Defaulting to hostname 'linux'. Apr 17 23:43:45.360853 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:43:45.370193 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:43:45.434977 kernel: SCSI subsystem initialized Apr 17 23:43:45.446985 kernel: Loading iSCSI transport class v2.0-870. 
Apr 17 23:43:45.458992 kernel: iscsi: registered transport (tcp) Apr 17 23:43:45.484132 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:43:45.484220 kernel: QLogic iSCSI HBA Driver Apr 17 23:43:45.537855 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:43:45.544155 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:43:45.582962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:43:45.583065 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:43:45.583094 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:43:45.629967 kernel: raid6: avx2x4 gen() 17963 MB/s Apr 17 23:43:45.646968 kernel: raid6: avx2x2 gen() 17648 MB/s Apr 17 23:43:45.664953 kernel: raid6: avx2x1 gen() 13502 MB/s Apr 17 23:43:45.665020 kernel: raid6: using algorithm avx2x4 gen() 17963 MB/s Apr 17 23:43:45.682451 kernel: raid6: .... xor() 6763 MB/s, rmw enabled Apr 17 23:43:45.682517 kernel: raid6: using avx2x2 recovery algorithm Apr 17 23:43:45.705978 kernel: xor: automatically using best checksumming function avx Apr 17 23:43:45.879967 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:43:45.894473 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:43:45.901204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:43:45.928437 systemd-udevd[400]: Using default interface naming scheme 'v255'. Apr 17 23:43:45.935766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:43:45.944581 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:43:45.978853 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 17 23:43:46.018293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 17 23:43:46.033187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:43:46.128882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:43:46.135153 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:43:46.171790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:43:46.175055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:43:46.212114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:43:46.224215 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:43:46.246965 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:43:46.275191 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:43:46.292183 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:43:46.303970 kernel: AES CTR mode by8 optimization enabled Apr 17 23:43:46.304064 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:43:46.315245 kernel: blk-mq: reduced tag depth to 10240 Apr 17 23:43:46.315897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:43:46.316162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:46.379128 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 17 23:43:46.391427 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:43:46.417062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:43:46.417615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:43:46.449081 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 17 23:43:46.449442 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 17 23:43:46.449682 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 17 23:43:46.453969 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 17 23:43:46.454467 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:43:46.474150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:46.529163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:43:46.529214 kernel: GPT:17805311 != 33554431 Apr 17 23:43:46.529240 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:43:46.529263 kernel: GPT:17805311 != 33554431 Apr 17 23:43:46.529284 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:43:46.529308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.529331 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 17 23:43:46.501365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:46.550858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:43:46.586992 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (461) Apr 17 23:43:46.604953 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (469) Apr 17 23:43:46.611366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:46.631663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Apr 17 23:43:46.646225 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 17 23:43:46.670770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. 
Apr 17 23:43:46.688118 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 17 23:43:46.718619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 17 23:43:46.741138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:43:46.757490 disk-uuid[542]: Primary Header is updated. Apr 17 23:43:46.757490 disk-uuid[542]: Secondary Entries is updated. Apr 17 23:43:46.757490 disk-uuid[542]: Secondary Header is updated. Apr 17 23:43:46.779163 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.775167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:43:46.823607 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.823642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:46.862607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:47.820054 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:43:47.820138 disk-uuid[543]: The operation has completed successfully. Apr 17 23:43:47.898202 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:43:47.898353 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:43:47.929195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:43:47.949433 sh[569]: Success Apr 17 23:43:47.972954 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:43:48.068029 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:43:48.075551 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:43:48.100536 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:43:48.155499 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:43:48.155594 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:48.155620 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:43:48.165089 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:43:48.177727 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:43:48.209966 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:43:48.217547 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:43:48.218597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:43:48.224172 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:43:48.238183 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:43:48.307535 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:48.307626 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:48.307653 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:43:48.325957 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:43:48.326035 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:43:48.341395 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:43:48.359453 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:48.359104 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:43:48.375317 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:43:48.482382 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:43:48.489157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:43:48.592660 systemd-networkd[752]: lo: Link UP Apr 17 23:43:48.592673 systemd-networkd[752]: lo: Gained carrier Apr 17 23:43:48.595979 systemd-networkd[752]: Enumeration completed Apr 17 23:43:48.596150 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:43:48.603208 ignition[660]: Ignition 2.19.0 Apr 17 23:43:48.596742 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:43:48.603218 ignition[660]: Stage: fetch-offline Apr 17 23:43:48.596749 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:43:48.603264 ignition[660]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.599877 systemd-networkd[752]: eth0: Link UP Apr 17 23:43:48.603276 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.599884 systemd-networkd[752]: eth0: Gained carrier Apr 17 23:43:48.603430 ignition[660]: parsed url from cmdline: "" Apr 17 23:43:48.599900 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:43:48.603437 ignition[660]: no config URL provided Apr 17 23:43:48.612034 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:43:48.603445 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:43:48.612053 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.99/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 17 23:43:48.603458 ignition[660]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:43:48.614552 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:43:48.603467 ignition[660]: failed to fetch config: resource requires networking Apr 17 23:43:48.632865 systemd[1]: Reached target network.target - Network. Apr 17 23:43:48.603679 ignition[660]: Ignition finished successfully Apr 17 23:43:48.655204 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:43:48.703136 ignition[760]: Ignition 2.19.0 Apr 17 23:43:48.716165 unknown[760]: fetched base config from "system" Apr 17 23:43:48.703146 ignition[760]: Stage: fetch Apr 17 23:43:48.716177 unknown[760]: fetched base config from "system" Apr 17 23:43:48.703337 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.716186 unknown[760]: fetched user config from "gcp" Apr 17 23:43:48.703350 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.719126 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:43:48.703495 ignition[760]: parsed url from cmdline: "" Apr 17 23:43:48.742195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:43:48.703502 ignition[760]: no config URL provided Apr 17 23:43:48.801488 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 17 23:43:48.703512 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:43:48.828147 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:43:48.703527 ignition[760]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:43:48.882632 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:43:48.703558 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 17 23:43:48.890751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:43:48.708466 ignition[760]: GET result: OK Apr 17 23:43:48.917237 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:43:48.708561 ignition[760]: parsing config with SHA512: 4edc912f907114c445cf4874f97e0103f9aaad718a3dbc5258b10e4e8c327b53fd2049df279fc33a45f9d35dc5735972ca70c3e0654e3237e1cdfe29d9dcb453 Apr 17 23:43:48.923320 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:43:48.716753 ignition[760]: fetch: fetch complete Apr 17 23:43:48.940355 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:43:48.716760 ignition[760]: fetch: fetch passed Apr 17 23:43:48.970229 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:43:48.716823 ignition[760]: Ignition finished successfully Apr 17 23:43:48.999240 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 17 23:43:48.788987 ignition[766]: Ignition 2.19.0 Apr 17 23:43:48.788998 ignition[766]: Stage: kargs Apr 17 23:43:48.789273 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.789287 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.790369 ignition[766]: kargs: kargs passed Apr 17 23:43:48.790427 ignition[766]: Ignition finished successfully Apr 17 23:43:48.856483 ignition[771]: Ignition 2.19.0 Apr 17 23:43:48.856496 ignition[771]: Stage: disks Apr 17 23:43:48.856743 ignition[771]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:48.856761 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:48.858467 ignition[771]: disks: disks passed Apr 17 23:43:48.858533 ignition[771]: Ignition finished successfully Apr 17 23:43:49.055845 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 17 23:43:49.222094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:43:49.229121 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:43:49.383990 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:43:49.384886 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:43:49.393881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:43:49.418115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:43:49.447108 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:43:49.447999 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 17 23:43:49.529145 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788) Apr 17 23:43:49.529197 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:49.529235 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:49.529259 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:43:49.529282 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:43:49.529305 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:43:49.448095 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:43:49.448140 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:43:49.521912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:43:49.557817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:43:49.580187 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:43:49.723245 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:43:49.735670 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:43:49.746098 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:43:49.756088 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:43:49.912141 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:43:49.919164 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:43:49.954982 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:49.961195 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:43:49.971615 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:43:50.021436 ignition[900]: INFO : Ignition 2.19.0 Apr 17 23:43:50.021436 ignition[900]: INFO : Stage: mount Apr 17 23:43:50.021436 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:50.021436 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 17 23:43:50.060283 ignition[900]: INFO : mount: mount passed Apr 17 23:43:50.060283 ignition[900]: INFO : Ignition finished successfully Apr 17 23:43:50.024887 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:43:50.029729 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:43:50.049086 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:43:50.221273 systemd-networkd[752]: eth0: Gained IPv6LL Apr 17 23:43:50.391202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:43:50.440961 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (912) Apr 17 23:43:50.458490 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:50.458602 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:50.458629 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:43:50.481437 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:43:50.481547 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:43:50.485183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:43:50.525741 ignition[929]: INFO : Ignition 2.19.0
Apr 17 23:43:50.525741 ignition[929]: INFO : Stage: files
Apr 17 23:43:50.542102 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:43:50.542102 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:43:50.542102 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:43:50.542102 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:43:50.542102 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:43:50.540027 unknown[929]: wrote ssh authorized keys file for user: core
Apr 17 23:43:50.645156 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:43:50.645156 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:43:50.889054 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:43:51.188662 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:43:51.188662 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:43:51.221112 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 23:43:51.613836 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:43:54.124331 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:43:54.124331 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:43:54.163089 ignition[929]: INFO : files: files passed
Apr 17 23:43:54.163089 ignition[929]: INFO : Ignition finished successfully
Apr 17 23:43:54.128826 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:43:54.149183 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:43:54.196187 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
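The Ignition files stage above (SSH keys for user "core", downloaded files, the enabled prepare-helm.service unit, and the kubernetes sysext symlink) is what a Butane/Ignition provisioning config produces. A minimal Butane sketch that would drive roughly these operations — the SSH key and the unit body are illustrative placeholders, not recovered from the log:

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...  # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
      hard: false
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm binary to /opt/bin  # placeholder body
```

Butane transpiles this to the Ignition JSON that the initramfs fetches and applies; paths are written relative to /sysroot during this stage, which is why the log shows "/sysroot/opt/...".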
Apr 17 23:43:54.207667 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:43:54.375141 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:43:54.375141 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:43:54.207796 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:43:54.413271 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:43:54.271114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:43:54.277467 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:43:54.317175 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:43:54.401687 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:43:54.401812 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:43:54.424966 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:43:54.448124 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:43:54.469380 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:43:54.475177 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:43:54.538776 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:43:54.560170 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:43:54.603546 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:43:54.612500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:43:54.652311 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:43:54.652734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:43:54.652950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:43:54.686530 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:43:54.714277 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:43:54.714704 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:43:54.730548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:43:54.768305 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:43:54.768724 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:43:54.786528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:43:54.803538 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:43:54.841356 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:43:54.858280 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:43:54.858638 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:43:54.859112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:43:54.899423 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:43:54.899814 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:43:54.917438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:43:54.917598 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:43:54.937434 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:43:54.937630 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:43:54.978495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:43:54.978726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:43:55.007504 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:43:55.007648 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:43:55.024230 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:43:55.065319 ignition[981]: INFO : Ignition 2.19.0
Apr 17 23:43:55.065319 ignition[981]: INFO : Stage: umount
Apr 17 23:43:55.065319 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:43:55.065319 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 17 23:43:55.065319 ignition[981]: INFO : umount: umount passed
Apr 17 23:43:55.065319 ignition[981]: INFO : Ignition finished successfully
Apr 17 23:43:55.073105 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:43:55.073405 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:43:55.097260 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:43:55.106088 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:43:55.106478 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:43:55.156448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:43:55.156639 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:43:55.192024 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:43:55.193072 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:43:55.193194 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:43:55.207815 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:43:55.207959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:43:55.229377 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:43:55.229504 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:43:55.250537 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:43:55.250600 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:43:55.268204 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:43:55.268322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:43:55.288254 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:43:55.288348 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:43:55.307215 systemd[1]: Stopped target network.target - Network.
Apr 17 23:43:55.323161 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:43:55.323290 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:43:55.341378 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:43:55.359136 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:43:55.363107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:43:55.378189 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:43:55.378297 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:43:55.404230 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:43:55.404307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:43:55.425229 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:43:55.425325 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:43:55.443184 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:43:55.443330 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:43:55.461216 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:43:55.461320 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:43:55.480184 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:43:55.480307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:43:55.498477 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:43:55.504005 systemd-networkd[752]: eth0: DHCPv6 lease lost
Apr 17 23:43:55.516289 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:43:55.543642 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:43:55.543783 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:43:55.564603 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:43:55.564851 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:43:55.574835 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:43:55.574889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:43:55.596168 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:43:55.621255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:43:55.621345 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:43:55.655342 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:43:55.655422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:43:55.664412 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:43:55.664482 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:43:55.681394 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:43:55.681471 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:43:55.698538 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:43:55.727715 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:43:56.112110 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:43:55.727903 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:43:55.753480 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:43:55.753589 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:43:55.774202 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:43:55.774277 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:43:55.794224 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:43:55.794336 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:43:55.824135 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:43:55.824255 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:43:55.851142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:43:55.851277 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:43:55.886198 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:43:55.909267 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:43:55.909347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:43:55.926512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:43:55.926591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:43:55.957841 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:43:55.958002 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:43:55.978731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:43:55.978853 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:43:56.010715 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:43:56.024213 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:43:56.068302 systemd[1]: Switching root.
Apr 17 23:43:56.341092 systemd-journald[184]: Journal stopped
Apr 17 23:43:58.919107 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:43:58.919156 kernel: SELinux: policy capability open_perms=1
Apr 17 23:43:58.919178 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:43:58.919197 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:43:58.919214 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:43:58.919232 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:43:58.919253 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:43:58.919276 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:43:58.919295 kernel: audit: type=1403 audit(1776469436.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:43:58.919317 systemd[1]: Successfully loaded SELinux policy in 83.456ms.
Apr 17 23:43:58.919339 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.467ms.
Apr 17 23:43:58.919361 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:43:58.919382 systemd[1]: Detected virtualization google.
Apr 17 23:43:58.919403 systemd[1]: Detected architecture x86-64.
Apr 17 23:43:58.919428 systemd[1]: Detected first boot.
Apr 17 23:43:58.919450 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:43:58.919471 zram_generator::config[1022]: No configuration found.
Apr 17 23:43:58.919494 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:43:58.919515 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:43:58.919541 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:43:58.919562 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:43:58.919585 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:43:58.919606 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:43:58.919627 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:43:58.919650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:43:58.919672 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:43:58.919697 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:43:58.919719 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:43:58.919741 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:43:58.919763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:43:58.919785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:43:58.919814 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:43:58.919837 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:43:58.919859 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:43:58.919885 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:43:58.919907 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:43:58.919946 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:43:58.919968 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:43:58.919990 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:43:58.920011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:43:58.920041 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:43:58.920063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:43:58.920087 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:43:58.920113 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:43:58.920136 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:43:58.920158 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:43:58.920180 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:43:58.920203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:43:58.920225 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:43:58.920247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:43:58.920275 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:43:58.920298 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:43:58.920321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:43:58.920343 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:43:58.920366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:43:58.920393 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:43:58.920416 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:43:58.920438 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:43:58.920461 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:43:58.920484 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:43:58.920507 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:43:58.920530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:43:58.920553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:43:58.920582 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:43:58.920605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:43:58.920628 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:43:58.920651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:43:58.920673 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:43:58.920696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:43:58.920718 kernel: fuse: init (API version 7.39)
Apr 17 23:43:58.920739 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:43:58.920767 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:43:58.920790 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:43:58.920818 kernel: ACPI: bus type drm_connector registered
Apr 17 23:43:58.920839 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:43:58.920861 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:43:58.920883 kernel: loop: module loaded
Apr 17 23:43:58.920904 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:43:58.920938 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:43:58.920992 systemd-journald[1109]: Collecting audit messages is disabled.
Apr 17 23:43:58.921041 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:43:58.921064 systemd-journald[1109]: Journal started
Apr 17 23:43:58.921111 systemd-journald[1109]: Runtime Journal (/run/log/journal/a4ddc12246124a67ae57c5ecbb46b053) is 8.0M, max 148.7M, 140.7M free.
Apr 17 23:43:57.683189 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:43:57.705897 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:43:57.706491 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:43:58.953954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:43:58.978960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:43:58.993846 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:43:58.993949 systemd[1]: Stopped verity-setup.service.
Apr 17 23:43:59.024969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:43:59.036977 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:43:59.047567 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:43:59.057368 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:43:59.067346 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:43:59.077367 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:43:59.087349 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:43:59.097334 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:43:59.107627 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:43:59.119599 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:43:59.131599 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:43:59.132078 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:43:59.143536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:43:59.143818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:43:59.155522 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:43:59.155774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:43:59.166514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:43:59.166785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:43:59.178507 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:43:59.178799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:43:59.189524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:43:59.189795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:43:59.200510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:43:59.211449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:43:59.223455 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:43:59.235524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:43:59.261620 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:43:59.283151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:43:59.295630 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:43:59.306133 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:43:59.306206 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:43:59.319105 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:43:59.339190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:43:59.356206 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:43:59.366377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:43:59.375491 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:43:59.392299 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:43:59.401336 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:43:59.410227 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:43:59.419111 systemd-journald[1109]: Time spent on flushing to /var/log/journal/a4ddc12246124a67ae57c5ecbb46b053 is 102.893ms for 926 entries.
Apr 17 23:43:59.419111 systemd-journald[1109]: System Journal (/var/log/journal/a4ddc12246124a67ae57c5ecbb46b053) is 8.0M, max 584.8M, 576.8M free.
Apr 17 23:43:59.571299 systemd-journald[1109]: Received client request to flush runtime journal.
Apr 17 23:43:59.571378 kernel: loop0: detected capacity change from 0 to 228704
Apr 17 23:43:59.431633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:43:59.442995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:43:59.464254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:43:59.480629 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:43:59.504248 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:43:59.519514 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:43:59.532370 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:43:59.544505 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:43:59.561610 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:43:59.577180 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:43:59.599899 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:43:59.614149 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:43:59.631411 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:43:59.652343 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 17 23:43:59.685145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:43:59.705218 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:43:59.707454 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:43:59.733774 kernel: loop1: detected capacity change from 0 to 54824 Apr 17 23:43:59.734631 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:43:59.751648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:43:59.851000 kernel: loop2: detected capacity change from 0 to 142488 Apr 17 23:43:59.849744 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Apr 17 23:43:59.849782 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Apr 17 23:43:59.864440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 17 23:43:59.991970 kernel: loop3: detected capacity change from 0 to 140768 Apr 17 23:44:00.089452 kernel: loop4: detected capacity change from 0 to 228704 Apr 17 23:44:00.144998 kernel: loop5: detected capacity change from 0 to 54824 Apr 17 23:44:00.190485 kernel: loop6: detected capacity change from 0 to 142488 Apr 17 23:44:00.244968 kernel: loop7: detected capacity change from 0 to 140768 Apr 17 23:44:00.295761 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Apr 17 23:44:00.299593 (sd-merge)[1164]: Merged extensions into '/usr'. Apr 17 23:44:00.306604 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:44:00.307087 systemd[1]: Reloading... Apr 17 23:44:00.471965 zram_generator::config[1186]: No configuration found. Apr 17 23:44:00.626967 ldconfig[1135]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:44:00.729947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:44:00.816356 systemd[1]: Reloading finished in 508 ms. Apr 17 23:44:00.849479 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:44:00.859540 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:44:00.871579 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:44:00.894202 systemd[1]: Starting ensure-sysext.service... Apr 17 23:44:00.913265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:44:00.936202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 17 23:44:00.952108 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:44:00.952130 systemd[1]: Reloading... Apr 17 23:44:00.972884 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:44:00.975347 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:44:00.979038 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:44:00.979623 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Apr 17 23:44:00.979761 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Apr 17 23:44:00.991283 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:44:00.991302 systemd-tmpfiles[1232]: Skipping /boot Apr 17 23:44:01.020668 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:44:01.020693 systemd-tmpfiles[1232]: Skipping /boot Apr 17 23:44:01.040796 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Apr 17 23:44:01.118965 zram_generator::config[1259]: No configuration found. Apr 17 23:44:01.416434 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1268) Apr 17 23:44:01.429982 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 17 23:44:01.459981 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 23:44:01.468170 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:44:01.483078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 17 23:44:01.517964 kernel: EDAC MC: Ver: 3.0.0 Apr 17 23:44:01.530062 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 17 23:44:01.544020 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Apr 17 23:44:01.562963 kernel: ACPI: button: Sleep Button [SLPF] Apr 17 23:44:01.680000 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:44:01.680108 systemd[1]: Reloading finished in 727 ms. Apr 17 23:44:01.705067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:44:01.715985 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:44:01.729683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:44:01.768420 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:44:01.806511 systemd[1]: Finished ensure-sysext.service. Apr 17 23:44:01.823421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 17 23:44:01.835340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:01.841199 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:44:01.856766 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:44:01.870965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:44:01.879195 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:44:01.896203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:44:01.916411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:44:01.927880 lvm[1343]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:44:01.934200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:44:01.953279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:44:01.961333 augenrules[1356]: No rules Apr 17 23:44:01.971057 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:44:01.981228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:44:01.988940 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:44:02.009638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:44:02.031283 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:44:02.050739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:44:02.059099 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:44:02.076250 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:44:02.094243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:44:02.104118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:02.106595 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:44:02.117591 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:44:02.129620 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:44:02.130851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:44:02.131241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:44:02.131686 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:44:02.131997 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:44:02.132401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:44:02.132870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:44:02.133478 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:44:02.134302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:44:02.139738 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:44:02.140259 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:44:02.151422 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:44:02.159452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:44:02.167222 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:44:02.169654 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Apr 17 23:44:02.169770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:44:02.169892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:44:02.176213 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:44:02.186238 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:44:02.186341 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 17 23:44:02.188075 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:44:02.190342 lvm[1384]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:44:02.246557 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:44:02.253780 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:44:02.285378 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Apr 17 23:44:02.298548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:44:02.315180 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:44:02.397523 systemd-networkd[1365]: lo: Link UP Apr 17 23:44:02.397540 systemd-networkd[1365]: lo: Gained carrier Apr 17 23:44:02.399557 systemd-networkd[1365]: Enumeration completed Apr 17 23:44:02.400032 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:44:02.400911 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:44:02.400918 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:44:02.402007 systemd-networkd[1365]: eth0: Link UP Apr 17 23:44:02.402023 systemd-networkd[1365]: eth0: Gained carrier Apr 17 23:44:02.402050 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:44:02.415040 systemd-networkd[1365]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:44:02.415073 systemd-networkd[1365]: eth0: DHCPv4 address 10.128.0.99/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 17 23:44:02.418253 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:44:02.419783 systemd-resolved[1366]: Positive Trust Anchors: Apr 17 23:44:02.419803 systemd-resolved[1366]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:44:02.420106 systemd-resolved[1366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:44:02.427209 systemd-resolved[1366]: Defaulting to hostname 'linux'. Apr 17 23:44:02.429896 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:44:02.440395 systemd[1]: Reached target network.target - Network. Apr 17 23:44:02.449118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:44:02.460147 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:44:02.470285 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:44:02.481233 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Apr 17 23:44:02.493403 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:44:02.503326 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:44:02.515160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:44:02.526151 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:44:02.526218 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:44:02.535153 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:44:02.544851 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:44:02.557056 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:44:02.570060 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:44:02.581016 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:44:02.591335 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:44:02.601134 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:44:02.610208 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:44:02.610265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:44:02.620160 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:44:02.632019 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:44:02.653353 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:44:02.689863 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:44:02.710831 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 17 23:44:02.718627 jq[1417]: false Apr 17 23:44:02.721092 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:44:02.730187 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:44:02.747230 coreos-metadata[1415]: Apr 17 23:44:02.747 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Apr 17 23:44:02.750575 coreos-metadata[1415]: Apr 17 23:44:02.748 INFO Fetch successful Apr 17 23:44:02.750575 coreos-metadata[1415]: Apr 17 23:44:02.748 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Apr 17 23:44:02.750575 coreos-metadata[1415]: Apr 17 23:44:02.749 INFO Fetch successful Apr 17 23:44:02.750575 coreos-metadata[1415]: Apr 17 23:44:02.749 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Apr 17 23:44:02.750575 coreos-metadata[1415]: Apr 17 23:44:02.750 INFO Fetch successful Apr 17 23:44:02.750575 coreos-metadata[1415]: Apr 17 23:44:02.750 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Apr 17 23:44:02.750514 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:44:02.757314 coreos-metadata[1415]: Apr 17 23:44:02.755 INFO Fetch successful Apr 17 23:44:02.766848 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:44:02.786396 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 17 23:44:02.798998 extend-filesystems[1420]: Found loop4 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found loop5 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found loop6 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found loop7 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda1 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda2 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda3 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found usr Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda4 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda6 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda7 Apr 17 23:44:02.798998 extend-filesystems[1420]: Found sda9 Apr 17 23:44:02.798998 extend-filesystems[1420]: Checking size of /dev/sda9 Apr 17 23:44:02.957230 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Apr 17 23:44:02.957300 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1262) Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: ---------------------------------------------------- Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: corporation. Support and training for ntp-4 are
Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: available at https://www.nwtime.org/support Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: ---------------------------------------------------- Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: proto: precision = 0.087 usec (-23) Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: basedate set to 2026-04-05 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: gps base set to 2026-04-05 (week 2413) Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Listen normally on 3 eth0 10.128.0.99:123 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Listen normally on 4 lo [::1]:123 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: bind(21) AF_INET6 fe80::4001:aff:fe80:63%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:63%2#123 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: failed to init interface for address fe80::4001:aff:fe80:63%2 Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: Listening on routing socket on fd #21 for interface updates Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:02.957485 ntpd[1423]: 17 Apr 23:44:02 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:02.806235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:44:02.826283 dbus-daemon[1416]: [system] SELinux support is enabled Apr 17 23:44:02.995988 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Apr 17 23:44:02.996067 extend-filesystems[1420]: Resized partition /dev/sda9 Apr 17 23:44:02.857243 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:44:02.834390 dbus-daemon[1416]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1365 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:44:03.006206 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:44:02.895875 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Apr 17 23:44:02.837333 ntpd[1423]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:44:03.036872 extend-filesystems[1438]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 17 23:44:03.036872 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:44:03.036872 extend-filesystems[1438]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Apr 17 23:44:02.896746 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:44:02.837366 ntpd[1423]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:44:03.070687 extend-filesystems[1420]: Resized filesystem in /dev/sda9 Apr 17 23:44:03.103134 update_engine[1444]: I20260417 23:44:03.041975 1444 main.cc:92] Flatcar Update Engine starting Apr 17 23:44:03.103134 update_engine[1444]: I20260417 23:44:03.047526 1444 update_check_scheduler.cc:74] Next update check in 11m26s Apr 17 23:44:02.906298 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 17 23:44:02.837382 ntpd[1423]: ---------------------------------------------------- Apr 17 23:44:03.103986 jq[1447]: true Apr 17 23:44:02.941419 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:44:02.837398 ntpd[1423]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:44:02.976244 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:44:02.837411 ntpd[1423]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:44:03.004239 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:44:02.837426 ntpd[1423]: corporation. Support and training for ntp-4 are Apr 17 23:44:03.004521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:44:02.837440 ntpd[1423]: available at https://www.nwtime.org/support Apr 17 23:44:03.005167 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:44:02.837454 ntpd[1423]: ---------------------------------------------------- Apr 17 23:44:03.006012 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:44:02.841332 ntpd[1423]: proto: precision = 0.087 usec (-23) Apr 17 23:44:03.016670 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:44:02.853466 ntpd[1423]: basedate set to 2026-04-05 Apr 17 23:44:03.018003 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:44:02.853492 ntpd[1423]: gps base set to 2026-04-05 (week 2413) Apr 17 23:44:03.034629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:44:02.858462 ntpd[1423]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:44:03.034948 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 17 23:44:02.858544 ntpd[1423]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:44:03.054744 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:44:02.861045 ntpd[1423]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:44:03.054777 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 17 23:44:02.861124 ntpd[1423]: Listen normally on 3 eth0 10.128.0.99:123 Apr 17 23:44:03.055133 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:44:02.861192 ntpd[1423]: Listen normally on 4 lo [::1]:123 Apr 17 23:44:03.056894 systemd-logind[1441]: New seat seat0. Apr 17 23:44:02.861272 ntpd[1423]: bind(21) AF_INET6 fe80::4001:aff:fe80:63%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:44:03.081590 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:44:02.861304 ntpd[1423]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:63%2#123 Apr 17 23:44:02.861326 ntpd[1423]: failed to init interface for address fe80::4001:aff:fe80:63%2 Apr 17 23:44:02.861379 ntpd[1423]: Listening on routing socket on fd #21 for interface updates Apr 17 23:44:02.866826 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:02.866867 ntpd[1423]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:03.144845 dbus-daemon[1416]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:44:03.165584 jq[1453]: true Apr 17 23:44:03.168533 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:44:03.182783 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:44:03.218556 systemd[1]: Started update-engine.service - Update Engine. 
Apr 17 23:44:03.220543 tar[1452]: linux-amd64/LICENSE Apr 17 23:44:03.220543 tar[1452]: linux-amd64/helm Apr 17 23:44:03.236969 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:44:03.237520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:44:03.237787 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:44:03.258334 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:44:03.268181 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:44:03.268478 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:44:03.292415 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:44:03.299901 bash[1484]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:44:03.313094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:44:03.348745 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:44:03.370276 systemd[1]: Starting sshkeys.service... Apr 17 23:44:03.437504 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:44:03.461672 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 17 23:44:03.652233 coreos-metadata[1489]: Apr 17 23:44:03.651 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 17 23:44:03.654555 coreos-metadata[1489]: Apr 17 23:44:03.653 INFO Fetch failed with 404: resource not found Apr 17 23:44:03.654555 coreos-metadata[1489]: Apr 17 23:44:03.653 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 17 23:44:03.654555 coreos-metadata[1489]: Apr 17 23:44:03.653 INFO Fetch successful Apr 17 23:44:03.654555 coreos-metadata[1489]: Apr 17 23:44:03.653 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 17 23:44:03.661274 coreos-metadata[1489]: Apr 17 23:44:03.656 INFO Fetch failed with 404: resource not found Apr 17 23:44:03.661274 coreos-metadata[1489]: Apr 17 23:44:03.657 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 17 23:44:03.661274 coreos-metadata[1489]: Apr 17 23:44:03.657 INFO Fetch failed with 404: resource not found Apr 17 23:44:03.661274 coreos-metadata[1489]: Apr 17 23:44:03.657 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 17 23:44:03.661274 coreos-metadata[1489]: Apr 17 23:44:03.659 INFO Fetch successful Apr 17 23:44:03.663309 unknown[1489]: wrote ssh authorized keys file for user: core Apr 17 23:44:03.667827 dbus-daemon[1416]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:44:03.668298 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:44:03.674097 dbus-daemon[1416]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1480 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:44:03.702414 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 17 23:44:03.722811 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:44:03.731439 update-ssh-keys[1501]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:44:03.731284 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:44:03.753168 systemd[1]: Finished sshkeys.service. Apr 17 23:44:03.829503 polkitd[1503]: Started polkitd version 121 Apr 17 23:44:03.840267 ntpd[1423]: bind(24) AF_INET6 fe80::4001:aff:fe80:63%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:44:03.840907 ntpd[1423]: 17 Apr 23:44:03 ntpd[1423]: bind(24) AF_INET6 fe80::4001:aff:fe80:63%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:44:03.840907 ntpd[1423]: 17 Apr 23:44:03 ntpd[1423]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:63%2#123 Apr 17 23:44:03.840907 ntpd[1423]: 17 Apr 23:44:03 ntpd[1423]: failed to init interface for address fe80::4001:aff:fe80:63%2 Apr 17 23:44:03.840321 ntpd[1423]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:63%2#123 Apr 17 23:44:03.840343 ntpd[1423]: failed to init interface for address fe80::4001:aff:fe80:63%2 Apr 17 23:44:03.853169 systemd-networkd[1365]: eth0: Gained IPv6LL Apr 17 23:44:03.863193 polkitd[1503]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:44:03.865542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:44:03.863290 polkitd[1503]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:44:03.863967 polkitd[1503]: Finished loading, compiling and executing 2 rules Apr 17 23:44:03.867029 dbus-daemon[1416]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:44:03.867797 polkitd[1503]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:44:03.878248 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 17 23:44:03.886537 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:44:03.906217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:03.926050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:44:03.942358 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Apr 17 23:44:03.972627 systemd-hostnamed[1480]: Hostname set to (transient)
Apr 17 23:44:03.973799 systemd-resolved[1366]: System hostname changed to 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18'.
Apr 17 23:44:03.991328 init.sh[1516]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Apr 17 23:44:03.991765 init.sh[1516]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Apr 17 23:44:03.991765 init.sh[1516]: + /usr/bin/google_instance_setup
Apr 17 23:44:04.037502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:44:04.100762 containerd[1454]: time="2026-04-17T23:44:04.097893307Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:44:04.214504 containerd[1454]: time="2026-04-17T23:44:04.214115625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.224348 containerd[1454]: time="2026-04-17T23:44:04.224141271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:44:04.224348 containerd[1454]: time="2026-04-17T23:44:04.224200825Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:44:04.224348 containerd[1454]: time="2026-04-17T23:44:04.224231639Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:44:04.224607 containerd[1454]: time="2026-04-17T23:44:04.224478073Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:44:04.224607 containerd[1454]: time="2026-04-17T23:44:04.224509669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.224711 containerd[1454]: time="2026-04-17T23:44:04.224621111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:44:04.224711 containerd[1454]: time="2026-04-17T23:44:04.224657241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.226542 containerd[1454]: time="2026-04-17T23:44:04.226492233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:44:04.226542 containerd[1454]: time="2026-04-17T23:44:04.226542383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.226704 containerd[1454]: time="2026-04-17T23:44:04.226569438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:44:04.226704 containerd[1454]: time="2026-04-17T23:44:04.226586958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.226829 containerd[1454]: time="2026-04-17T23:44:04.226794251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.229357 containerd[1454]: time="2026-04-17T23:44:04.228857161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:44:04.229357 containerd[1454]: time="2026-04-17T23:44:04.229103782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:44:04.229357 containerd[1454]: time="2026-04-17T23:44:04.229131824Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:44:04.229357 containerd[1454]: time="2026-04-17T23:44:04.229259406Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:44:04.229357 containerd[1454]: time="2026-04-17T23:44:04.229332360Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:44:04.247086 containerd[1454]: time="2026-04-17T23:44:04.246328469Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:44:04.247086 containerd[1454]: time="2026-04-17T23:44:04.246417015Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:44:04.247086 containerd[1454]: time="2026-04-17T23:44:04.246449015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:44:04.247086 containerd[1454]: time="2026-04-17T23:44:04.246475113Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:44:04.247086 containerd[1454]: time="2026-04-17T23:44:04.246499585Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:44:04.247086 containerd[1454]: time="2026-04-17T23:44:04.246734053Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247137782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247301497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247326034Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247348373Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247372055Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247393584Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247414588Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247438207Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247476644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247500906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247522073Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247542109Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247573076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248289 containerd[1454]: time="2026-04-17T23:44:04.247596526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247617558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247641316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247670795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247697787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247718178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247740667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247762635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247786183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247805835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247825564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247855287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.248906 containerd[1454]: time="2026-04-17T23:44:04.247882264Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.247914444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.249992423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.250047805Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.250146761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.250215134Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.250238382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.250259501Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.250277054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.252116045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.253090179Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:44:04.253732 containerd[1454]: time="2026-04-17T23:44:04.253119913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:44:04.258586 containerd[1454]: time="2026-04-17T23:44:04.255573516Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:44:04.258586 containerd[1454]: time="2026-04-17T23:44:04.255739064Z" level=info msg="Connect containerd service"
Apr 17 23:44:04.258586 containerd[1454]: time="2026-04-17T23:44:04.257327442Z" level=info msg="using legacy CRI server"
Apr 17 23:44:04.258586 containerd[1454]: time="2026-04-17T23:44:04.257374346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:44:04.260029 containerd[1454]: time="2026-04-17T23:44:04.259162721Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.262757221Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263246053Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263311052Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263360338Z" level=info msg="Start subscribing containerd event"
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263435196Z" level=info msg="Start recovering state"
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263526626Z" level=info msg="Start event monitor"
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263556627Z" level=info msg="Start snapshots syncer"
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263571531Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:44:04.263944 containerd[1454]: time="2026-04-17T23:44:04.263582991Z" level=info msg="Start streaming server"
Apr 17 23:44:04.263796 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:44:04.275142 containerd[1454]: time="2026-04-17T23:44:04.274995981Z" level=info msg="containerd successfully booted in 0.182434s"
Apr 17 23:44:04.630831 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:44:04.679214 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:44:04.700386 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:44:04.723982 systemd[1]: Started sshd@0-10.128.0.99:22-50.85.169.122:36206.service - OpenSSH per-connection server daemon (50.85.169.122:36206).
Apr 17 23:44:04.751073 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:44:04.752004 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:44:04.774548 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:44:04.815214 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:44:04.835478 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:44:04.852465 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:44:04.862622 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:44:04.899978 tar[1452]: linux-amd64/README.md
Apr 17 23:44:04.933162 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:44:05.041658 instance-setup[1523]: INFO Running google_set_multiqueue.
Apr 17 23:44:05.060431 instance-setup[1523]: INFO Set channels for eth0 to 2.
Apr 17 23:44:05.066041 instance-setup[1523]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Apr 17 23:44:05.070378 instance-setup[1523]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Apr 17 23:44:05.070460 instance-setup[1523]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Apr 17 23:44:05.070524 instance-setup[1523]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Apr 17 23:44:05.071102 instance-setup[1523]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Apr 17 23:44:05.073459 instance-setup[1523]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Apr 17 23:44:05.073517 instance-setup[1523]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Apr 17 23:44:05.076064 instance-setup[1523]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Apr 17 23:44:05.084984 instance-setup[1523]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Apr 17 23:44:05.090181 instance-setup[1523]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Apr 17 23:44:05.092656 instance-setup[1523]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Apr 17 23:44:05.092807 instance-setup[1523]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Apr 17 23:44:05.115687 init.sh[1516]: + /usr/bin/google_metadata_script_runner --script-type startup
Apr 17 23:44:05.288549 startup-script[1582]: INFO Starting startup scripts.
Apr 17 23:44:05.294610 startup-script[1582]: INFO No startup scripts found in metadata.
Apr 17 23:44:05.294712 startup-script[1582]: INFO Finished running startup scripts.
Apr 17 23:44:05.320659 init.sh[1516]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Apr 17 23:44:05.320659 init.sh[1516]: + daemon_pids=()
Apr 17 23:44:05.320872 init.sh[1516]: + for d in accounts clock_skew network
Apr 17 23:44:05.321548 init.sh[1516]: + daemon_pids+=($!)
Apr 17 23:44:05.321548 init.sh[1516]: + for d in accounts clock_skew network
Apr 17 23:44:05.321699 init.sh[1585]: + /usr/bin/google_accounts_daemon
Apr 17 23:44:05.322107 init.sh[1516]: + daemon_pids+=($!)
Apr 17 23:44:05.322107 init.sh[1516]: + for d in accounts clock_skew network
Apr 17 23:44:05.322107 init.sh[1516]: + daemon_pids+=($!)
Apr 17 23:44:05.322107 init.sh[1516]: + NOTIFY_SOCKET=/run/systemd/notify
Apr 17 23:44:05.322107 init.sh[1516]: + /usr/bin/systemd-notify --ready
Apr 17 23:44:05.322324 init.sh[1586]: + /usr/bin/google_clock_skew_daemon
Apr 17 23:44:05.322617 init.sh[1587]: + /usr/bin/google_network_daemon
Apr 17 23:44:05.336349 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Apr 17 23:44:05.355815 init.sh[1516]: + wait -n 1585 1586 1587
Apr 17 23:44:05.486596 sshd[1540]: Accepted publickey for core from 50.85.169.122 port 36206 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:05.492671 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:05.517395 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:44:05.536351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:44:05.556047 systemd-logind[1441]: New session 1 of user core.
Apr 17 23:44:05.591071 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:44:05.617088 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:44:05.655993 (systemd)[1593]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:44:05.837541 google-clock-skew[1586]: INFO Starting Google Clock Skew daemon.
Apr 17 23:44:05.857566 google-clock-skew[1586]: INFO Clock drift token has changed: 0.
Apr 17 23:44:05.891664 google-networking[1587]: INFO Starting Google Networking daemon.
Apr 17 23:44:05.947709 groupadd[1604]: group added to /etc/group: name=google-sudoers, GID=1000
Apr 17 23:44:05.954832 groupadd[1604]: group added to /etc/gshadow: name=google-sudoers
Apr 17 23:44:05.963437 systemd[1593]: Queued start job for default target default.target.
Apr 17 23:44:05.968784 systemd[1593]: Created slice app.slice - User Application Slice.
Apr 17 23:44:05.968834 systemd[1593]: Reached target paths.target - Paths.
Apr 17 23:44:05.968860 systemd[1593]: Reached target timers.target - Timers.
Apr 17 23:44:05.972136 systemd[1593]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:44:05.999673 systemd[1593]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:44:05.999866 systemd[1593]: Reached target sockets.target - Sockets.
Apr 17 23:44:05.999893 systemd[1593]: Reached target basic.target - Basic System.
Apr 17 23:44:05.999988 systemd[1593]: Reached target default.target - Main User Target.
Apr 17 23:44:06.000374 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:44:06.000863 systemd[1593]: Startup finished in 323ms.
Apr 17 23:44:06.017195 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:44:06.043655 groupadd[1604]: new group: name=google-sudoers, GID=1000
Apr 17 23:44:06.075586 google-accounts[1585]: INFO Starting Google Accounts daemon.
Apr 17 23:44:06.001211 systemd-resolved[1366]: Clock change detected. Flushing caches.
Apr 17 23:44:06.027247 systemd-journald[1109]: Time jumped backwards, rotating.
Apr 17 23:44:06.001497 google-clock-skew[1586]: INFO Synced system time with hardware clock.
Apr 17 23:44:06.027438 init.sh[1616]: useradd: invalid user name '0': use --badname to ignore
Apr 17 23:44:06.003196 google-accounts[1585]: WARNING OS Login not installed.
Apr 17 23:44:06.010465 google-accounts[1585]: INFO Creating a new user account for 0.
Apr 17 23:44:06.015346 google-accounts[1585]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Apr 17 23:44:06.221108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:06.233946 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:44:06.241599 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:44:06.244496 systemd[1]: Startup finished in 1.062s (kernel) + 11.976s (initrd) + 9.659s (userspace) = 22.698s.
Apr 17 23:44:06.434271 systemd[1]: Started sshd@1-10.128.0.99:22-50.85.169.122:36220.service - OpenSSH per-connection server daemon (50.85.169.122:36220).
Apr 17 23:44:06.748734 ntpd[1423]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:63%2]:123
Apr 17 23:44:06.749237 ntpd[1423]: 17 Apr 23:44:06 ntpd[1423]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:63%2]:123
Apr 17 23:44:07.119054 sshd[1631]: Accepted publickey for core from 50.85.169.122 port 36220 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:07.121182 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:07.129784 systemd-logind[1441]: New session 2 of user core.
Apr 17 23:44:07.134001 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:44:07.259099 kubelet[1625]: E0417 23:44:07.259005 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:44:07.262435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:44:07.262687 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:44:07.263207 systemd[1]: kubelet.service: Consumed 1.276s CPU time.
Apr 17 23:44:07.591740 sshd[1631]: pam_unix(sshd:session): session closed for user core
Apr 17 23:44:07.597889 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:44:07.599005 systemd[1]: sshd@1-10.128.0.99:22-50.85.169.122:36220.service: Deactivated successfully.
Apr 17 23:44:07.601359 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:44:07.602706 systemd-logind[1441]: Removed session 2.
Apr 17 23:44:07.713148 systemd[1]: Started sshd@2-10.128.0.99:22-50.85.169.122:36230.service - OpenSSH per-connection server daemon (50.85.169.122:36230).
Apr 17 23:44:08.392329 sshd[1644]: Accepted publickey for core from 50.85.169.122 port 36230 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:08.394259 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:08.400979 systemd-logind[1441]: New session 3 of user core.
Apr 17 23:44:08.408023 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:44:08.862315 sshd[1644]: pam_unix(sshd:session): session closed for user core
Apr 17 23:44:08.868389 systemd[1]: sshd@2-10.128.0.99:22-50.85.169.122:36230.service: Deactivated successfully.
Apr 17 23:44:08.870874 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:44:08.871835 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:44:08.873519 systemd-logind[1441]: Removed session 3.
Apr 17 23:44:08.987145 systemd[1]: Started sshd@3-10.128.0.99:22-50.85.169.122:36246.service - OpenSSH per-connection server daemon (50.85.169.122:36246).
Apr 17 23:44:09.695857 sshd[1651]: Accepted publickey for core from 50.85.169.122 port 36246 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:09.697803 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:09.704642 systemd-logind[1441]: New session 4 of user core.
Apr 17 23:44:09.708072 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:44:10.189058 sshd[1651]: pam_unix(sshd:session): session closed for user core
Apr 17 23:44:10.194805 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:44:10.196032 systemd[1]: sshd@3-10.128.0.99:22-50.85.169.122:36246.service: Deactivated successfully.
Apr 17 23:44:10.198486 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:44:10.199840 systemd-logind[1441]: Removed session 4.
Apr 17 23:44:10.314942 systemd[1]: Started sshd@4-10.128.0.99:22-50.85.169.122:50740.service - OpenSSH per-connection server daemon (50.85.169.122:50740).
Apr 17 23:44:11.031691 sshd[1658]: Accepted publickey for core from 50.85.169.122 port 50740 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:11.033619 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:11.040333 systemd-logind[1441]: New session 5 of user core.
Apr 17 23:44:11.050049 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:44:11.436932 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:44:11.437456 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:44:11.454653 sudo[1661]: pam_unix(sudo:session): session closed for user root
Apr 17 23:44:11.567841 sshd[1658]: pam_unix(sshd:session): session closed for user core
Apr 17 23:44:11.572713 systemd[1]: sshd@4-10.128.0.99:22-50.85.169.122:50740.service: Deactivated successfully.
Apr 17 23:44:11.575290 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:44:11.577154 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:44:11.579017 systemd-logind[1441]: Removed session 5.
Apr 17 23:44:11.692220 systemd[1]: Started sshd@5-10.128.0.99:22-50.85.169.122:50748.service - OpenSSH per-connection server daemon (50.85.169.122:50748).
Apr 17 23:44:12.361009 sshd[1666]: Accepted publickey for core from 50.85.169.122 port 50748 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:12.362118 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:12.368864 systemd-logind[1441]: New session 6 of user core.
Apr 17 23:44:12.380089 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:44:12.730861 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:44:12.731379 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:44:12.737050 sudo[1670]: pam_unix(sudo:session): session closed for user root
Apr 17 23:44:12.751028 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:44:12.751534 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:44:12.767642 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:44:12.772680 auditctl[1673]: No rules
Apr 17 23:44:12.773344 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:44:12.773716 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:44:12.777299 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:44:12.823279 augenrules[1691]: No rules
Apr 17 23:44:12.825083 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:44:12.827078 sudo[1669]: pam_unix(sudo:session): session closed for user root
Apr 17 23:44:12.934373 sshd[1666]: pam_unix(sshd:session): session closed for user core
Apr 17 23:44:12.940227 systemd[1]: sshd@5-10.128.0.99:22-50.85.169.122:50748.service: Deactivated successfully.
Apr 17 23:44:12.942572 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:44:12.943562 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:44:12.945102 systemd-logind[1441]: Removed session 6.
Apr 17 23:44:13.063187 systemd[1]: Started sshd@6-10.128.0.99:22-50.85.169.122:50760.service - OpenSSH per-connection server daemon (50.85.169.122:50760).
Apr 17 23:44:13.765803 sshd[1699]: Accepted publickey for core from 50.85.169.122 port 50760 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:44:13.767092 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:13.773551 systemd-logind[1441]: New session 7 of user core.
Apr 17 23:44:13.784082 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:44:14.153007 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:44:14.153522 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:44:14.603202 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:44:14.607470 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:44:15.061188 dockerd[1718]: time="2026-04-17T23:44:15.061023289Z" level=info msg="Starting up"
Apr 17 23:44:15.204636 dockerd[1718]: time="2026-04-17T23:44:15.204573578Z" level=info msg="Loading containers: start."
Apr 17 23:44:15.363923 kernel: Initializing XFRM netlink socket
Apr 17 23:44:15.476877 systemd-networkd[1365]: docker0: Link UP
Apr 17 23:44:15.499824 dockerd[1718]: time="2026-04-17T23:44:15.499741815Z" level=info msg="Loading containers: done."
Apr 17 23:44:15.522702 dockerd[1718]: time="2026-04-17T23:44:15.522640198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:44:15.522946 dockerd[1718]: time="2026-04-17T23:44:15.522810779Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:44:15.523004 dockerd[1718]: time="2026-04-17T23:44:15.522971675Z" level=info msg="Daemon has completed initialization"
Apr 17 23:44:15.569269 dockerd[1718]: time="2026-04-17T23:44:15.568217243Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:44:15.568544 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:44:16.466493 containerd[1454]: time="2026-04-17T23:44:16.466437633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 17 23:44:17.083659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774876543.mount: Deactivated successfully.
Apr 17 23:44:17.513267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:44:17.521094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:17.960538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:17.972387 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:44:18.044931 kubelet[1918]: E0417 23:44:18.044849 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:44:18.052374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:44:18.052910 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:44:18.959572 containerd[1454]: time="2026-04-17T23:44:18.959496032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:18.961282 containerd[1454]: time="2026-04-17T23:44:18.961212333Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193995"
Apr 17 23:44:18.962846 containerd[1454]: time="2026-04-17T23:44:18.962744813Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:18.967897 containerd[1454]: time="2026-04-17T23:44:18.967827401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:18.970269 containerd[1454]: time="2026-04-17T23:44:18.969692182Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.503195661s"
Apr 17 23:44:18.970269 containerd[1454]: time="2026-04-17T23:44:18.969768827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 17 23:44:18.970898 containerd[1454]: time="2026-04-17T23:44:18.970868723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 17 23:44:20.825585 containerd[1454]: time="2026-04-17T23:44:20.825509576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:20.827347 containerd[1454]: time="2026-04-17T23:44:20.827276323Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171453"
Apr 17 23:44:20.828940 containerd[1454]: time="2026-04-17T23:44:20.828576593Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:20.832572 containerd[1454]: time="2026-04-17T23:44:20.832504403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:20.834359 containerd[1454]: time="2026-04-17T23:44:20.834184551Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.863194629s"
Apr 17 23:44:20.834359 containerd[1454]: time="2026-04-17T23:44:20.834234406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 17 23:44:20.835050 containerd[1454]: time="2026-04-17T23:44:20.835000275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 17 23:44:22.416675 containerd[1454]: time="2026-04-17T23:44:22.416600568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:22.418396 containerd[1454]: time="2026-04-17T23:44:22.418309426Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289762"
Apr 17 23:44:22.419795 containerd[1454]: time="2026-04-17T23:44:22.419670975Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:22.423891 containerd[1454]: time="2026-04-17T23:44:22.423810050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:22.425489 containerd[1454]: time="2026-04-17T23:44:22.425323726Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.590119807s"
Apr 17 23:44:22.425489 containerd[1454]: time="2026-04-17T23:44:22.425373033Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 17 23:44:22.426342 containerd[1454]: time="2026-04-17T23:44:22.426205837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 17 23:44:23.686050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180054818.mount: Deactivated successfully.
Apr 17 23:44:24.387424 containerd[1454]: time="2026-04-17T23:44:24.387345912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:24.388922 containerd[1454]: time="2026-04-17T23:44:24.388848925Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010717"
Apr 17 23:44:24.390602 containerd[1454]: time="2026-04-17T23:44:24.390515656Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:24.393739 containerd[1454]: time="2026-04-17T23:44:24.393664490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:24.395076 containerd[1454]: time="2026-04-17T23:44:24.394556888Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.968077851s"
Apr 17 23:44:24.395076 containerd[1454]: time="2026-04-17T23:44:24.394623976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 17 23:44:24.395664 containerd[1454]: time="2026-04-17T23:44:24.395576299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 17 23:44:24.920830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200635162.mount: Deactivated successfully.
Apr 17 23:44:26.119404 containerd[1454]: time="2026-04-17T23:44:26.119326966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:26.121279 containerd[1454]: time="2026-04-17T23:44:26.121167527Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244"
Apr 17 23:44:26.123538 containerd[1454]: time="2026-04-17T23:44:26.123079180Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:26.127206 containerd[1454]: time="2026-04-17T23:44:26.127136915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:26.129787 containerd[1454]: time="2026-04-17T23:44:26.128658677Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.733003898s"
Apr 17 23:44:26.129787 containerd[1454]: time="2026-04-17T23:44:26.128720504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 17 23:44:26.129787 containerd[1454]: time="2026-04-17T23:44:26.129711244Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 17 23:44:26.631988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475053329.mount: Deactivated successfully.
Apr 17 23:44:26.641865 containerd[1454]: time="2026-04-17T23:44:26.641793687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:26.643118 containerd[1454]: time="2026-04-17T23:44:26.643054968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Apr 17 23:44:26.644785 containerd[1454]: time="2026-04-17T23:44:26.644718429Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:26.649872 containerd[1454]: time="2026-04-17T23:44:26.649743093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:26.651239 containerd[1454]: time="2026-04-17T23:44:26.650955152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.205905ms"
Apr 17 23:44:26.651239 containerd[1454]: time="2026-04-17T23:44:26.651007565Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 17 23:44:26.652459 containerd[1454]: time="2026-04-17T23:44:26.652193276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 17 23:44:27.186346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636078799.mount: Deactivated successfully.
Apr 17 23:44:28.303541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:44:28.311055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:28.649031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:28.662461 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:44:28.698782 containerd[1454]: time="2026-04-17T23:44:28.697058911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:28.700422 containerd[1454]: time="2026-04-17T23:44:28.700361910Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719432"
Apr 17 23:44:28.701974 containerd[1454]: time="2026-04-17T23:44:28.701924305Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:28.709445 containerd[1454]: time="2026-04-17T23:44:28.709354965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:28.711996 containerd[1454]: time="2026-04-17T23:44:28.711804169Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.059565903s"
Apr 17 23:44:28.711996 containerd[1454]: time="2026-04-17T23:44:28.711859134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 17 23:44:28.725008 kubelet[2067]: E0417 23:44:28.724952 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:44:28.730023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:44:28.730464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:44:32.390714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:32.400139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:32.445994 systemd[1]: Reloading requested from client PID 2110 ('systemctl') (unit session-7.scope)...
Apr 17 23:44:32.446020 systemd[1]: Reloading...
Apr 17 23:44:32.622804 zram_generator::config[2157]: No configuration found.
Apr 17 23:44:32.779297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:44:32.885849 systemd[1]: Reloading finished in 438 ms.
Apr 17 23:44:32.956091 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:44:32.956232 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:44:32.956604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:32.962227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:33.240007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:33.245541 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:44:33.308565 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:44:33.309851 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:44:33.309851 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:44:33.309851 kubelet[2202]: I0417 23:44:33.308952 2202 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:44:33.600369 kubelet[2202]: I0417 23:44:33.600221 2202 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 17 23:44:33.600369 kubelet[2202]: I0417 23:44:33.600261 2202 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:44:33.600661 kubelet[2202]: I0417 23:44:33.600623 2202 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:44:33.653009 kubelet[2202]: E0417 23:44:33.652953 2202 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:44:33.661786 kubelet[2202]: I0417 23:44:33.659674 2202 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:44:33.672341 kubelet[2202]: E0417 23:44:33.672258 2202 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:44:33.672341 kubelet[2202]: I0417 23:44:33.672317 2202 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:44:33.676333 kubelet[2202]: I0417 23:44:33.676286 2202 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 17 23:44:33.676701 kubelet[2202]: I0417 23:44:33.676633 2202 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:44:33.676971 kubelet[2202]: I0417 23:44:33.676678 2202 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:44:33.676971 kubelet[2202]: I0417 23:44:33.676964 2202 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:44:33.677208 kubelet[2202]: I0417 23:44:33.676983 2202 container_manager_linux.go:303] "Creating device plugin manager"
Apr 17 23:44:33.677208 kubelet[2202]: I0417 23:44:33.677171 2202 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:44:33.685621 kubelet[2202]: I0417 23:44:33.685430 2202 kubelet.go:480] "Attempting to sync node with API server"
Apr 17 23:44:33.685621 kubelet[2202]: I0417 23:44:33.685478 2202 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:44:33.685621 kubelet[2202]: I0417 23:44:33.685527 2202 kubelet.go:386] "Adding apiserver pod source"
Apr 17 23:44:33.685621 kubelet[2202]: I0417 23:44:33.685567 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:44:33.691067 kubelet[2202]: E0417 23:44:33.690027 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18&limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:44:33.691067 kubelet[2202]: E0417 23:44:33.690646 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:44:33.691495 kubelet[2202]: I0417 23:44:33.691470 2202 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:44:33.694179 kubelet[2202]: I0417 23:44:33.694134 2202 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:44:33.695989 kubelet[2202]: W0417 23:44:33.695938 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:44:33.715737 kubelet[2202]: I0417 23:44:33.715679 2202 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 17 23:44:33.715737 kubelet[2202]: I0417 23:44:33.715771 2202 server.go:1289] "Started kubelet"
Apr 17 23:44:33.716131 kubelet[2202]: I0417 23:44:33.716085 2202 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:44:33.718386 kubelet[2202]: I0417 23:44:33.718186 2202 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:44:33.721134 kubelet[2202]: I0417 23:44:33.721031 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:44:33.721843 kubelet[2202]: I0417 23:44:33.721796 2202 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:44:33.724194 kubelet[2202]: E0417 23:44:33.722011 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18.18a74992ff265890 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,UID:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,},FirstTimestamp:2026-04-17 23:44:33.715706 +0000 UTC m=+0.463378247,LastTimestamp:2026-04-17 23:44:33.715706 +0000 UTC m=+0.463378247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,}"
Apr 17 23:44:33.728110 kubelet[2202]: I0417 23:44:33.727841 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:44:33.728110 kubelet[2202]: E0417 23:44:33.728063 2202 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:44:33.728285 kubelet[2202]: I0417 23:44:33.728241 2202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:44:33.731421 kubelet[2202]: E0417 23:44:33.730996 2202 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found"
Apr 17 23:44:33.731421 kubelet[2202]: I0417 23:44:33.731410 2202 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 23:44:33.733641 kubelet[2202]: I0417 23:44:33.731685 2202 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 23:44:33.733641 kubelet[2202]: I0417 23:44:33.731774 2202 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:44:33.733641 kubelet[2202]: I0417 23:44:33.732701 2202 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:44:33.733641 kubelet[2202]: I0417 23:44:33.732846 2202 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:44:33.733641 kubelet[2202]: E0417 23:44:33.733408 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:44:33.736387 kubelet[2202]: I0417 23:44:33.736361 2202 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:44:33.756596 kubelet[2202]: E0417 23:44:33.756538 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="200ms"
Apr 17 23:44:33.770348 kubelet[2202]: I0417 23:44:33.770305 2202 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:44:33.773142 kubelet[2202]: I0417 23:44:33.773107 2202 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:44:33.773142 kubelet[2202]: I0417 23:44:33.773137 2202 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:44:33.773309 kubelet[2202]: I0417 23:44:33.773161 2202 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:44:33.775837 kubelet[2202]: I0417 23:44:33.775806 2202 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:44:33.776029 kubelet[2202]: I0417 23:44:33.776012 2202 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 23:44:33.776728 kubelet[2202]: I0417 23:44:33.776701 2202 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:44:33.777645 kubelet[2202]: I0417 23:44:33.777623 2202 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:44:33.777868 kubelet[2202]: E0417 23:44:33.777828 2202 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:44:33.778629 kubelet[2202]: E0417 23:44:33.778174 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:44:33.778629 kubelet[2202]: I0417 23:44:33.778272 2202 policy_none.go:49] "None policy: Start"
Apr 17 23:44:33.778629 kubelet[2202]: I0417 23:44:33.778294 2202 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:44:33.778629 kubelet[2202]: I0417 23:44:33.778312 2202 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:44:33.788559 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:44:33.800904 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:44:33.807309 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 17 23:44:33.820491 kubelet[2202]: E0417 23:44:33.818913 2202 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:44:33.820491 kubelet[2202]: I0417 23:44:33.819134 2202 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:44:33.820491 kubelet[2202]: I0417 23:44:33.819147 2202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:44:33.820491 kubelet[2202]: I0417 23:44:33.819845 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:44:33.821499 kubelet[2202]: E0417 23:44:33.821246 2202 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:44:33.821842 kubelet[2202]: E0417 23:44:33.821817 2202 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" Apr 17 23:44:33.899120 systemd[1]: Created slice kubepods-burstable-pod77fe7d600296684bce0ea34962ad346e.slice - libcontainer container kubepods-burstable-pod77fe7d600296684bce0ea34962ad346e.slice. Apr 17 23:44:33.915072 kubelet[2202]: E0417 23:44:33.914678 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.916685 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 17 23:44:33.925027 kubelet[2202]: I0417 23:44:33.924960 2202 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.926343 kubelet[2202]: E0417 23:44:33.926303 2202 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.929716 systemd[1]: Created slice kubepods-burstable-pod0b86d1498f769c8b769fa909dbcbcdf6.slice - libcontainer container kubepods-burstable-pod0b86d1498f769c8b769fa909dbcbcdf6.slice. Apr 17 23:44:33.933159 kubelet[2202]: I0417 23:44:33.933119 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933305 kubelet[2202]: I0417 23:44:33.933198 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77fe7d600296684bce0ea34962ad346e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"77fe7d600296684bce0ea34962ad346e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933305 kubelet[2202]: I0417 23:44:33.933235 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77fe7d600296684bce0ea34962ad346e-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"77fe7d600296684bce0ea34962ad346e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933305 kubelet[2202]: I0417 23:44:33.933266 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933461 kubelet[2202]: I0417 23:44:33.933316 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933461 kubelet[2202]: I0417 23:44:33.933351 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933461 kubelet[2202]: I0417 23:44:33.933380 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a8cb67dbcfbb7ce9aa6304e016fda3-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"22a8cb67dbcfbb7ce9aa6304e016fda3\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933461 kubelet[2202]: I0417 23:44:33.933409 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77fe7d600296684bce0ea34962ad346e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"77fe7d600296684bce0ea34962ad346e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.933610 kubelet[2202]: I0417 23:44:33.933445 2202 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.936669 kubelet[2202]: E0417 23:44:33.936613 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.941468 systemd[1]: Created slice kubepods-burstable-pod22a8cb67dbcfbb7ce9aa6304e016fda3.slice - libcontainer container kubepods-burstable-pod22a8cb67dbcfbb7ce9aa6304e016fda3.slice. 
Apr 17 23:44:33.943978 kubelet[2202]: E0417 23:44:33.943941 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:33.957486 kubelet[2202]: E0417 23:44:33.957382 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="400ms" Apr 17 23:44:34.135530 kubelet[2202]: I0417 23:44:34.135489 2202 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:34.136075 kubelet[2202]: E0417 23:44:34.136019 2202 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:34.216445 containerd[1454]: time="2026-04-17T23:44:34.216289211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,Uid:77fe7d600296684bce0ea34962ad346e,Namespace:kube-system,Attempt:0,}" Apr 17 23:44:34.237980 containerd[1454]: time="2026-04-17T23:44:34.237786172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,Uid:0b86d1498f769c8b769fa909dbcbcdf6,Namespace:kube-system,Attempt:0,}" Apr 17 23:44:34.250783 containerd[1454]: time="2026-04-17T23:44:34.250716265Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,Uid:22a8cb67dbcfbb7ce9aa6304e016fda3,Namespace:kube-system,Attempt:0,}" Apr 17 23:44:34.358441 kubelet[2202]: E0417 23:44:34.358318 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" interval="800ms" Apr 17 23:44:34.541366 kubelet[2202]: I0417 23:44:34.541199 2202 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:34.541968 kubelet[2202]: E0417 23:44:34.541770 2202 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:34.728846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872898874.mount: Deactivated successfully. 
Apr 17 23:44:34.738521 containerd[1454]: time="2026-04-17T23:44:34.738456538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:44:34.739794 containerd[1454]: time="2026-04-17T23:44:34.739711191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 17 23:44:34.741405 containerd[1454]: time="2026-04-17T23:44:34.741339405Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:44:34.744002 containerd[1454]: time="2026-04-17T23:44:34.743940823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:44:34.744118 containerd[1454]: time="2026-04-17T23:44:34.744052852Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:44:34.745149 containerd[1454]: time="2026-04-17T23:44:34.745049133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:44:34.747448 containerd[1454]: time="2026-04-17T23:44:34.747392538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.994736ms" Apr 17 23:44:34.747910 containerd[1454]: 
time="2026-04-17T23:44:34.747851082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:44:34.747988 containerd[1454]: time="2026-04-17T23:44:34.747954153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:44:34.752541 kubelet[2202]: E0417 23:44:34.752498 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:44:34.756625 containerd[1454]: time="2026-04-17T23:44:34.755622589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 504.61901ms" Apr 17 23:44:34.772534 kubelet[2202]: E0417 23:44:34.772476 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:44:34.793503 containerd[1454]: time="2026-04-17T23:44:34.792019940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 554.119277ms" Apr 17 23:44:34.976745 containerd[1454]: time="2026-04-17T23:44:34.976278451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:44:34.976745 containerd[1454]: time="2026-04-17T23:44:34.976355646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:44:34.976745 containerd[1454]: time="2026-04-17T23:44:34.976409132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:34.976745 containerd[1454]: time="2026-04-17T23:44:34.976589882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:34.981229 containerd[1454]: time="2026-04-17T23:44:34.980976966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:44:34.981229 containerd[1454]: time="2026-04-17T23:44:34.981176444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:44:34.984037 containerd[1454]: time="2026-04-17T23:44:34.983796583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:34.984387 containerd[1454]: time="2026-04-17T23:44:34.984315457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:34.990681 containerd[1454]: time="2026-04-17T23:44:34.990075429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:44:34.990681 containerd[1454]: time="2026-04-17T23:44:34.990328387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:44:34.990681 containerd[1454]: time="2026-04-17T23:44:34.990349716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:34.990681 containerd[1454]: time="2026-04-17T23:44:34.990474247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:35.025061 systemd[1]: Started cri-containerd-d89b81d1ed15081a1bd1082fe673015049762f097bf0448f5b6a0b12f6154dab.scope - libcontainer container d89b81d1ed15081a1bd1082fe673015049762f097bf0448f5b6a0b12f6154dab. Apr 17 23:44:35.041035 systemd[1]: Started cri-containerd-aab319e59fcf648b0df74a253974d31620017e8b27ffdeb852c082bc581e8a74.scope - libcontainer container aab319e59fcf648b0df74a253974d31620017e8b27ffdeb852c082bc581e8a74. Apr 17 23:44:35.054255 systemd[1]: Started cri-containerd-3a5a2075f23c6cac18e304fe7ad67a0d7b977fda5a0af05f2f30245f953f2af4.scope - libcontainer container 3a5a2075f23c6cac18e304fe7ad67a0d7b977fda5a0af05f2f30245f953f2af4. 
Apr 17 23:44:35.141722 containerd[1454]: time="2026-04-17T23:44:35.141659961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,Uid:77fe7d600296684bce0ea34962ad346e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d89b81d1ed15081a1bd1082fe673015049762f097bf0448f5b6a0b12f6154dab\"" Apr 17 23:44:35.149040 containerd[1454]: time="2026-04-17T23:44:35.148900980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,Uid:0b86d1498f769c8b769fa909dbcbcdf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab319e59fcf648b0df74a253974d31620017e8b27ffdeb852c082bc581e8a74\"" Apr 17 23:44:35.151563 kubelet[2202]: E0417 23:44:35.151518 2202 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86a" Apr 17 23:44:35.157474 kubelet[2202]: E0417 23:44:35.157429 2202 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d" Apr 17 23:44:35.160035 containerd[1454]: time="2026-04-17T23:44:35.159987774Z" level=info msg="CreateContainer within sandbox \"d89b81d1ed15081a1bd1082fe673015049762f097bf0448f5b6a0b12f6154dab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:44:35.162274 kubelet[2202]: E0417 23:44:35.160868 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18?timeout=10s\": dial tcp 10.128.0.99:6443: connect: connection refused" 
interval="1.6s" Apr 17 23:44:35.164785 containerd[1454]: time="2026-04-17T23:44:35.164714883Z" level=info msg="CreateContainer within sandbox \"aab319e59fcf648b0df74a253974d31620017e8b27ffdeb852c082bc581e8a74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:44:35.172356 kubelet[2202]: E0417 23:44:35.172302 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18&limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:44:35.183465 containerd[1454]: time="2026-04-17T23:44:35.183115469Z" level=info msg="CreateContainer within sandbox \"d89b81d1ed15081a1bd1082fe673015049762f097bf0448f5b6a0b12f6154dab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54885eebd8721d5ea3f266014dc53818074e14da17b231fec1e4355e698336da\"" Apr 17 23:44:35.185540 containerd[1454]: time="2026-04-17T23:44:35.185475898Z" level=info msg="StartContainer for \"54885eebd8721d5ea3f266014dc53818074e14da17b231fec1e4355e698336da\"" Apr 17 23:44:35.188316 containerd[1454]: time="2026-04-17T23:44:35.188273824Z" level=info msg="CreateContainer within sandbox \"aab319e59fcf648b0df74a253974d31620017e8b27ffdeb852c082bc581e8a74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"337990786627f9585b44dd0997722b06f4ab3b45c4926104d722e430aad3ed64\"" Apr 17 23:44:35.188965 containerd[1454]: time="2026-04-17T23:44:35.188931190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,Uid:22a8cb67dbcfbb7ce9aa6304e016fda3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5a2075f23c6cac18e304fe7ad67a0d7b977fda5a0af05f2f30245f953f2af4\"" Apr 17 23:44:35.189180 
containerd[1454]: time="2026-04-17T23:44:35.189007370Z" level=info msg="StartContainer for \"337990786627f9585b44dd0997722b06f4ab3b45c4926104d722e430aad3ed64\"" Apr 17 23:44:35.191164 kubelet[2202]: E0417 23:44:35.191122 2202 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86a" Apr 17 23:44:35.195074 containerd[1454]: time="2026-04-17T23:44:35.195021099Z" level=info msg="CreateContainer within sandbox \"3a5a2075f23c6cac18e304fe7ad67a0d7b977fda5a0af05f2f30245f953f2af4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:44:35.219926 containerd[1454]: time="2026-04-17T23:44:35.219726977Z" level=info msg="CreateContainer within sandbox \"3a5a2075f23c6cac18e304fe7ad67a0d7b977fda5a0af05f2f30245f953f2af4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ccca74910a91906f1d90cdc2bfb3a97b2f3718d5228c45289e1e084836b4478\"" Apr 17 23:44:35.221942 containerd[1454]: time="2026-04-17T23:44:35.220460741Z" level=info msg="StartContainer for \"0ccca74910a91906f1d90cdc2bfb3a97b2f3718d5228c45289e1e084836b4478\"" Apr 17 23:44:35.246951 systemd[1]: Started cri-containerd-337990786627f9585b44dd0997722b06f4ab3b45c4926104d722e430aad3ed64.scope - libcontainer container 337990786627f9585b44dd0997722b06f4ab3b45c4926104d722e430aad3ed64. Apr 17 23:44:35.260653 systemd[1]: Started cri-containerd-54885eebd8721d5ea3f266014dc53818074e14da17b231fec1e4355e698336da.scope - libcontainer container 54885eebd8721d5ea3f266014dc53818074e14da17b231fec1e4355e698336da. 
Apr 17 23:44:35.282817 kubelet[2202]: E0417 23:44:35.282727 2202 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:44:35.298240 systemd[1]: Started cri-containerd-0ccca74910a91906f1d90cdc2bfb3a97b2f3718d5228c45289e1e084836b4478.scope - libcontainer container 0ccca74910a91906f1d90cdc2bfb3a97b2f3718d5228c45289e1e084836b4478. Apr 17 23:44:35.349152 kubelet[2202]: I0417 23:44:35.347182 2202 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:35.349152 kubelet[2202]: E0417 23:44:35.347613 2202 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.99:6443/api/v1/nodes\": dial tcp 10.128.0.99:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:35.370158 containerd[1454]: time="2026-04-17T23:44:35.370101191Z" level=info msg="StartContainer for \"337990786627f9585b44dd0997722b06f4ab3b45c4926104d722e430aad3ed64\" returns successfully" Apr 17 23:44:35.406170 containerd[1454]: time="2026-04-17T23:44:35.405402548Z" level=info msg="StartContainer for \"54885eebd8721d5ea3f266014dc53818074e14da17b231fec1e4355e698336da\" returns successfully" Apr 17 23:44:35.431853 containerd[1454]: time="2026-04-17T23:44:35.431714281Z" level=info msg="StartContainer for \"0ccca74910a91906f1d90cdc2bfb3a97b2f3718d5228c45289e1e084836b4478\" returns successfully" Apr 17 23:44:35.803719 kubelet[2202]: E0417 23:44:35.803677 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" 
node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:35.805240 kubelet[2202]: E0417 23:44:35.805203 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:35.809909 kubelet[2202]: E0417 23:44:35.809879 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:36.816893 kubelet[2202]: E0417 23:44:36.816849 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:36.820357 kubelet[2202]: E0417 23:44:36.820318 2202 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:36.953524 kubelet[2202]: I0417 23:44:36.953476 2202 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.130372 kubelet[2202]: E0417 23:44:38.130324 2202 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.181593 kubelet[2202]: E0417 23:44:38.181426 2202 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18.18a74992ff265890 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,UID:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,},FirstTimestamp:2026-04-17 23:44:33.715706 +0000 UTC m=+0.463378247,LastTimestamp:2026-04-17 23:44:33.715706 +0000 UTC m=+0.463378247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,}" Apr 17 23:44:38.230862 kubelet[2202]: I0417 23:44:38.230087 2202 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.230862 kubelet[2202]: E0417 23:44:38.230147 2202 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\": node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" not found" Apr 17 23:44:38.239778 kubelet[2202]: I0417 23:44:38.238338 2202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.253855 kubelet[2202]: E0417 23:44:38.253655 2202 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18.18a74992ffe28538 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,UID:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,},FirstTimestamp:2026-04-17 23:44:33.7280382 +0000 UTC m=+0.475710443,LastTimestamp:2026-04-17 23:44:33.7280382 +0000 UTC m=+0.475710443,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18,}" Apr 17 23:44:38.279510 kubelet[2202]: E0417 23:44:38.279451 2202 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.279510 kubelet[2202]: I0417 23:44:38.279506 2202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.285112 kubelet[2202]: E0417 23:44:38.285054 2202 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.285112 kubelet[2202]: I0417 23:44:38.285106 2202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.289446 kubelet[2202]: E0417 23:44:38.289385 2202 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.695277 kubelet[2202]: I0417 23:44:38.695236 2202 apiserver.go:52] "Watching apiserver" Apr 17 23:44:38.732242 kubelet[2202]: I0417 23:44:38.732126 2202 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:44:38.783369 kubelet[2202]: I0417 23:44:38.783321 2202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:38.786496 kubelet[2202]: E0417 23:44:38.786446 2202 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:40.238947 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-7.scope)... Apr 17 23:44:40.238971 systemd[1]: Reloading... Apr 17 23:44:40.371793 zram_generator::config[2528]: No configuration found. Apr 17 23:44:40.530206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 17 23:44:40.570809 kubelet[2202]: I0417 23:44:40.570590 2202 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:40.582380 kubelet[2202]: I0417 23:44:40.581544 2202 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:44:40.701570 systemd[1]: Reloading finished in 461 ms. Apr 17 23:44:40.768423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:44:40.786061 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:44:40.786531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:44:40.793674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:44:41.076042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:44:41.091499 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:44:41.168598 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:44:41.168598 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:44:41.168598 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:44:41.170056 kubelet[2580]: I0417 23:44:41.168695 2580 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:44:41.181079 kubelet[2580]: I0417 23:44:41.181021 2580 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:44:41.181079 kubelet[2580]: I0417 23:44:41.181057 2580 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:44:41.181414 kubelet[2580]: I0417 23:44:41.181377 2580 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:44:41.182919 kubelet[2580]: I0417 23:44:41.182877 2580 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:44:41.185721 kubelet[2580]: I0417 23:44:41.185665 2580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:44:41.190768 kubelet[2580]: E0417 23:44:41.190712 2580 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:44:41.190899 kubelet[2580]: I0417 23:44:41.190883 2580 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:44:41.196788 kubelet[2580]: I0417 23:44:41.195727 2580 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:44:41.196788 kubelet[2580]: I0417 23:44:41.196487 2580 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:44:41.197219 kubelet[2580]: I0417 23:44:41.196526 2580 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:44:41.197518 kubelet[2580]: I0417 23:44:41.197500 2580 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 17 23:44:41.197641 kubelet[2580]: I0417 23:44:41.197630 2580 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:44:41.197869 kubelet[2580]: I0417 23:44:41.197838 2580 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:44:41.198323 kubelet[2580]: I0417 23:44:41.198306 2580 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:44:41.198444 kubelet[2580]: I0417 23:44:41.198432 2580 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:44:41.198589 kubelet[2580]: I0417 23:44:41.198577 2580 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:44:41.198711 kubelet[2580]: I0417 23:44:41.198699 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:44:41.204471 kubelet[2580]: I0417 23:44:41.204434 2580 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:44:41.205909 kubelet[2580]: I0417 23:44:41.205882 2580 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:44:41.242782 kubelet[2580]: I0417 23:44:41.241801 2580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:44:41.242782 kubelet[2580]: I0417 23:44:41.241894 2580 server.go:1289] "Started kubelet" Apr 17 23:44:41.247279 kubelet[2580]: I0417 23:44:41.245793 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:44:41.247859 kubelet[2580]: I0417 23:44:41.247836 2580 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:44:41.251481 kubelet[2580]: I0417 23:44:41.250740 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:44:41.256389 kubelet[2580]: I0417 23:44:41.253497 2580 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Apr 17 23:44:41.258470 kubelet[2580]: I0417 23:44:41.258252 2580 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:44:41.265168 kubelet[2580]: I0417 23:44:41.264068 2580 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:44:41.265168 kubelet[2580]: I0417 23:44:41.264347 2580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:44:41.266541 kubelet[2580]: I0417 23:44:41.265319 2580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:44:41.266541 kubelet[2580]: I0417 23:44:41.265501 2580 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:44:41.280788 kubelet[2580]: I0417 23:44:41.277936 2580 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:44:41.280788 kubelet[2580]: I0417 23:44:41.278103 2580 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:44:41.280788 kubelet[2580]: I0417 23:44:41.279267 2580 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:44:41.281235 kubelet[2580]: I0417 23:44:41.281206 2580 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:44:41.281235 kubelet[2580]: I0417 23:44:41.281237 2580 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:44:41.281422 kubelet[2580]: I0417 23:44:41.281263 2580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:44:41.281422 kubelet[2580]: I0417 23:44:41.281274 2580 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:44:41.281422 kubelet[2580]: E0417 23:44:41.281329 2580 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:44:41.286644 kubelet[2580]: E0417 23:44:41.286611 2580 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:44:41.288033 kubelet[2580]: I0417 23:44:41.287988 2580 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:44:41.379424 kubelet[2580]: I0417 23:44:41.379386 2580 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:44:41.379654 kubelet[2580]: I0417 23:44:41.379634 2580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:44:41.379810 kubelet[2580]: I0417 23:44:41.379797 2580 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:44:41.380918 kubelet[2580]: I0417 23:44:41.380048 2580 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:44:41.380918 kubelet[2580]: I0417 23:44:41.380065 2580 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:44:41.380918 kubelet[2580]: I0417 23:44:41.380095 2580 policy_none.go:49] "None policy: Start" Apr 17 23:44:41.380918 kubelet[2580]: I0417 23:44:41.380111 2580 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:44:41.380918 kubelet[2580]: I0417 23:44:41.380127 2580 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:44:41.380918 kubelet[2580]: I0417 23:44:41.380255 2580 state_mem.go:75] "Updated machine memory state" Apr 17 23:44:41.381535 kubelet[2580]: E0417 23:44:41.381514 2580 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:44:41.388279 kubelet[2580]: 
E0417 23:44:41.388228 2580 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:44:41.388495 kubelet[2580]: I0417 23:44:41.388473 2580 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:44:41.388574 kubelet[2580]: I0417 23:44:41.388500 2580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:44:41.389124 kubelet[2580]: I0417 23:44:41.389098 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:44:41.392804 kubelet[2580]: E0417 23:44:41.392656 2580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:44:41.507324 kubelet[2580]: I0417 23:44:41.505978 2580 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.516734 kubelet[2580]: I0417 23:44:41.516680 2580 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.516899 kubelet[2580]: I0417 23:44:41.516806 2580 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.583513 kubelet[2580]: I0417 23:44:41.583427 2580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.586410 kubelet[2580]: I0417 23:44:41.584094 2580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.586410 kubelet[2580]: I0417 23:44:41.584500 2580 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 
23:44:41.591898 kubelet[2580]: I0417 23:44:41.591862 2580 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:44:41.594838 kubelet[2580]: I0417 23:44:41.594668 2580 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:44:41.598104 kubelet[2580]: I0417 23:44:41.597900 2580 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Apr 17 23:44:41.598104 kubelet[2580]: E0417 23:44:41.597993 2580 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.668818 kubelet[2580]: I0417 23:44:41.667790 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a8cb67dbcfbb7ce9aa6304e016fda3-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"22a8cb67dbcfbb7ce9aa6304e016fda3\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.668818 kubelet[2580]: I0417 23:44:41.667862 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77fe7d600296684bce0ea34962ad346e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"77fe7d600296684bce0ea34962ad346e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 
23:44:41.668818 kubelet[2580]: I0417 23:44:41.667915 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.668818 kubelet[2580]: I0417 23:44:41.667948 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77fe7d600296684bce0ea34962ad346e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"77fe7d600296684bce0ea34962ad346e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.669135 kubelet[2580]: I0417 23:44:41.667983 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77fe7d600296684bce0ea34962ad346e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"77fe7d600296684bce0ea34962ad346e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.669135 kubelet[2580]: I0417 23:44:41.668013 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.669135 kubelet[2580]: I0417 
23:44:41.668045 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.669135 kubelet[2580]: I0417 23:44:41.668075 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:41.669339 kubelet[2580]: I0417 23:44:41.668108 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b86d1498f769c8b769fa909dbcbcdf6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" (UID: \"0b86d1498f769c8b769fa909dbcbcdf6\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:44:42.215280 kubelet[2580]: I0417 23:44:42.214875 2580 apiserver.go:52] "Watching apiserver" Apr 17 23:44:42.266435 kubelet[2580]: I0417 23:44:42.266374 2580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:44:42.406727 kubelet[2580]: I0417 23:44:42.406017 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" podStartSLOduration=2.405991532 
podStartE2EDuration="2.405991532s" podCreationTimestamp="2026-04-17 23:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:44:42.405463188 +0000 UTC m=+1.303101622" watchObservedRunningTime="2026-04-17 23:44:42.405991532 +0000 UTC m=+1.303629956" Apr 17 23:44:42.406727 kubelet[2580]: I0417 23:44:42.406194 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" podStartSLOduration=1.406187058 podStartE2EDuration="1.406187058s" podCreationTimestamp="2026-04-17 23:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:44:42.388911357 +0000 UTC m=+1.286549792" watchObservedRunningTime="2026-04-17 23:44:42.406187058 +0000 UTC m=+1.303825482" Apr 17 23:44:42.425434 kubelet[2580]: I0417 23:44:42.425193 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" podStartSLOduration=1.425153807 podStartE2EDuration="1.425153807s" podCreationTimestamp="2026-04-17 23:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:44:42.420414738 +0000 UTC m=+1.318053172" watchObservedRunningTime="2026-04-17 23:44:42.425153807 +0000 UTC m=+1.322792239" Apr 17 23:44:45.467636 kubelet[2580]: I0417 23:44:45.467571 2580 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:44:45.468476 kubelet[2580]: I0417 23:44:45.468383 2580 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:44:45.468639 containerd[1454]: time="2026-04-17T23:44:45.468093539Z" level=info msg="No cni config 
template is specified, wait for other system components to drop the config." Apr 17 23:44:46.236728 systemd[1]: Created slice kubepods-besteffort-pod8af785be_ae7a_4b10_baa4_6a911cef2d36.slice - libcontainer container kubepods-besteffort-pod8af785be_ae7a_4b10_baa4_6a911cef2d36.slice. Apr 17 23:44:46.296263 kubelet[2580]: I0417 23:44:46.295894 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8af785be-ae7a-4b10-baa4-6a911cef2d36-lib-modules\") pod \"kube-proxy-4hhrq\" (UID: \"8af785be-ae7a-4b10-baa4-6a911cef2d36\") " pod="kube-system/kube-proxy-4hhrq" Apr 17 23:44:46.296263 kubelet[2580]: I0417 23:44:46.295957 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tl6\" (UniqueName: \"kubernetes.io/projected/8af785be-ae7a-4b10-baa4-6a911cef2d36-kube-api-access-r6tl6\") pod \"kube-proxy-4hhrq\" (UID: \"8af785be-ae7a-4b10-baa4-6a911cef2d36\") " pod="kube-system/kube-proxy-4hhrq" Apr 17 23:44:46.296263 kubelet[2580]: I0417 23:44:46.295993 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8af785be-ae7a-4b10-baa4-6a911cef2d36-kube-proxy\") pod \"kube-proxy-4hhrq\" (UID: \"8af785be-ae7a-4b10-baa4-6a911cef2d36\") " pod="kube-system/kube-proxy-4hhrq" Apr 17 23:44:46.296263 kubelet[2580]: I0417 23:44:46.296018 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8af785be-ae7a-4b10-baa4-6a911cef2d36-xtables-lock\") pod \"kube-proxy-4hhrq\" (UID: \"8af785be-ae7a-4b10-baa4-6a911cef2d36\") " pod="kube-system/kube-proxy-4hhrq" Apr 17 23:44:46.404570 kubelet[2580]: E0417 23:44:46.404510 2580 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 17 
23:44:46.404570 kubelet[2580]: E0417 23:44:46.404555 2580 projected.go:194] Error preparing data for projected volume kube-api-access-r6tl6 for pod kube-system/kube-proxy-4hhrq: configmap "kube-root-ca.crt" not found Apr 17 23:44:46.404886 kubelet[2580]: E0417 23:44:46.404657 2580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8af785be-ae7a-4b10-baa4-6a911cef2d36-kube-api-access-r6tl6 podName:8af785be-ae7a-4b10-baa4-6a911cef2d36 nodeName:}" failed. No retries permitted until 2026-04-17 23:44:46.904627287 +0000 UTC m=+5.802265713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r6tl6" (UniqueName: "kubernetes.io/projected/8af785be-ae7a-4b10-baa4-6a911cef2d36-kube-api-access-r6tl6") pod "kube-proxy-4hhrq" (UID: "8af785be-ae7a-4b10-baa4-6a911cef2d36") : configmap "kube-root-ca.crt" not found Apr 17 23:44:46.622206 systemd[1]: Created slice kubepods-besteffort-pod5250d7ad_f8e7_4036_a938_0e8fab046aba.slice - libcontainer container kubepods-besteffort-pod5250d7ad_f8e7_4036_a938_0e8fab046aba.slice. 
Apr 17 23:44:46.698649 kubelet[2580]: I0417 23:44:46.698573 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5250d7ad-f8e7-4036-a938-0e8fab046aba-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-c2l98\" (UID: \"5250d7ad-f8e7-4036-a938-0e8fab046aba\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c2l98" Apr 17 23:44:46.698649 kubelet[2580]: I0417 23:44:46.698640 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvxdw\" (UniqueName: \"kubernetes.io/projected/5250d7ad-f8e7-4036-a938-0e8fab046aba-kube-api-access-vvxdw\") pod \"tigera-operator-6bf85f8dd-c2l98\" (UID: \"5250d7ad-f8e7-4036-a938-0e8fab046aba\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c2l98" Apr 17 23:44:46.928976 containerd[1454]: time="2026-04-17T23:44:46.928924011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c2l98,Uid:5250d7ad-f8e7-4036-a938-0e8fab046aba,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:44:46.970670 containerd[1454]: time="2026-04-17T23:44:46.970487332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:44:46.970670 containerd[1454]: time="2026-04-17T23:44:46.970579745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:44:46.970670 containerd[1454]: time="2026-04-17T23:44:46.970598876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:46.971000 containerd[1454]: time="2026-04-17T23:44:46.970739240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:47.003214 systemd[1]: Started cri-containerd-9b23e2f121677f1c5e3dcb2491d5938aaf2b72f812ae2fd828892811e616eea8.scope - libcontainer container 9b23e2f121677f1c5e3dcb2491d5938aaf2b72f812ae2fd828892811e616eea8. Apr 17 23:44:47.064613 containerd[1454]: time="2026-04-17T23:44:47.064499191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c2l98,Uid:5250d7ad-f8e7-4036-a938-0e8fab046aba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b23e2f121677f1c5e3dcb2491d5938aaf2b72f812ae2fd828892811e616eea8\"" Apr 17 23:44:47.068868 containerd[1454]: time="2026-04-17T23:44:47.068677057Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:44:47.146350 containerd[1454]: time="2026-04-17T23:44:47.146241347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hhrq,Uid:8af785be-ae7a-4b10-baa4-6a911cef2d36,Namespace:kube-system,Attempt:0,}" Apr 17 23:44:47.180570 containerd[1454]: time="2026-04-17T23:44:47.179727858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:44:47.180570 containerd[1454]: time="2026-04-17T23:44:47.179822209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:44:47.180570 containerd[1454]: time="2026-04-17T23:44:47.179850405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:47.180960 containerd[1454]: time="2026-04-17T23:44:47.180378810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:44:47.213120 systemd[1]: Started cri-containerd-1424dfa82ea7ba308e49a0f8fd8e6fed5e96bdb24cce35a7669b851361f1bbd8.scope - libcontainer container 1424dfa82ea7ba308e49a0f8fd8e6fed5e96bdb24cce35a7669b851361f1bbd8. Apr 17 23:44:47.252604 containerd[1454]: time="2026-04-17T23:44:47.252261258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hhrq,Uid:8af785be-ae7a-4b10-baa4-6a911cef2d36,Namespace:kube-system,Attempt:0,} returns sandbox id \"1424dfa82ea7ba308e49a0f8fd8e6fed5e96bdb24cce35a7669b851361f1bbd8\"" Apr 17 23:44:47.260050 containerd[1454]: time="2026-04-17T23:44:47.259858555Z" level=info msg="CreateContainer within sandbox \"1424dfa82ea7ba308e49a0f8fd8e6fed5e96bdb24cce35a7669b851361f1bbd8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:44:47.277961 containerd[1454]: time="2026-04-17T23:44:47.277886326Z" level=info msg="CreateContainer within sandbox \"1424dfa82ea7ba308e49a0f8fd8e6fed5e96bdb24cce35a7669b851361f1bbd8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c705a5aff6efc6c081758cd18abcc39220c6f9c5bc270206a3999439a27870f3\"" Apr 17 23:44:47.279976 containerd[1454]: time="2026-04-17T23:44:47.279681688Z" level=info msg="StartContainer for \"c705a5aff6efc6c081758cd18abcc39220c6f9c5bc270206a3999439a27870f3\"" Apr 17 23:44:47.321020 systemd[1]: Started cri-containerd-c705a5aff6efc6c081758cd18abcc39220c6f9c5bc270206a3999439a27870f3.scope - libcontainer container c705a5aff6efc6c081758cd18abcc39220c6f9c5bc270206a3999439a27870f3. Apr 17 23:44:47.363321 containerd[1454]: time="2026-04-17T23:44:47.363214978Z" level=info msg="StartContainer for \"c705a5aff6efc6c081758cd18abcc39220c6f9c5bc270206a3999439a27870f3\" returns successfully" Apr 17 23:44:48.124869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485872105.mount: Deactivated successfully. 
Apr 17 23:44:48.398445 kubelet[2580]: I0417 23:44:48.397062 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4hhrq" podStartSLOduration=2.397034728 podStartE2EDuration="2.397034728s" podCreationTimestamp="2026-04-17 23:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:44:48.395883672 +0000 UTC m=+7.293522104" watchObservedRunningTime="2026-04-17 23:44:48.397034728 +0000 UTC m=+7.294673162" Apr 17 23:44:48.547347 update_engine[1444]: I20260417 23:44:48.546808 1444 update_attempter.cc:509] Updating boot flags... Apr 17 23:44:48.676780 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2901) Apr 17 23:44:48.824815 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2903) Apr 17 23:44:49.882067 containerd[1454]: time="2026-04-17T23:44:49.881917489Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:44:49.883571 containerd[1454]: time="2026-04-17T23:44:49.883506197Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:44:49.885260 containerd[1454]: time="2026-04-17T23:44:49.885166612Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:44:49.890679 containerd[1454]: time="2026-04-17T23:44:49.890585810Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:44:49.892487 containerd[1454]: time="2026-04-17T23:44:49.891599352Z" level=info msg="Pulled image 
\"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.822871645s" Apr 17 23:44:49.892487 containerd[1454]: time="2026-04-17T23:44:49.891648296Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:44:49.898265 containerd[1454]: time="2026-04-17T23:44:49.898179456Z" level=info msg="CreateContainer within sandbox \"9b23e2f121677f1c5e3dcb2491d5938aaf2b72f812ae2fd828892811e616eea8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:44:49.924730 containerd[1454]: time="2026-04-17T23:44:49.924646610Z" level=info msg="CreateContainer within sandbox \"9b23e2f121677f1c5e3dcb2491d5938aaf2b72f812ae2fd828892811e616eea8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"22f3a6fbd1a10cd18647d401e9e8ee822fac407bc698ca1518b8b5677d4b2c21\"" Apr 17 23:44:49.925518 containerd[1454]: time="2026-04-17T23:44:49.925479098Z" level=info msg="StartContainer for \"22f3a6fbd1a10cd18647d401e9e8ee822fac407bc698ca1518b8b5677d4b2c21\"" Apr 17 23:44:49.971062 systemd[1]: Started cri-containerd-22f3a6fbd1a10cd18647d401e9e8ee822fac407bc698ca1518b8b5677d4b2c21.scope - libcontainer container 22f3a6fbd1a10cd18647d401e9e8ee822fac407bc698ca1518b8b5677d4b2c21. 
Apr 17 23:44:50.011967 containerd[1454]: time="2026-04-17T23:44:50.011903967Z" level=info msg="StartContainer for \"22f3a6fbd1a10cd18647d401e9e8ee822fac407bc698ca1518b8b5677d4b2c21\" returns successfully" Apr 17 23:44:51.254886 kubelet[2580]: I0417 23:44:51.254284 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-c2l98" podStartSLOduration=2.428279072 podStartE2EDuration="5.254259466s" podCreationTimestamp="2026-04-17 23:44:46 +0000 UTC" firstStartedPulling="2026-04-17 23:44:47.066848083 +0000 UTC m=+5.964486494" lastFinishedPulling="2026-04-17 23:44:49.892828475 +0000 UTC m=+8.790466888" observedRunningTime="2026-04-17 23:44:50.409130907 +0000 UTC m=+9.306769338" watchObservedRunningTime="2026-04-17 23:44:51.254259466 +0000 UTC m=+10.151897899" Apr 17 23:44:57.164577 sudo[1702]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:57.280104 sshd[1699]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:57.293561 systemd[1]: sshd@6-10.128.0.99:22-50.85.169.122:50760.service: Deactivated successfully. Apr 17 23:44:57.298132 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:44:57.299353 systemd[1]: session-7.scope: Consumed 6.740s CPU time, 160.6M memory peak, 0B memory swap peak. Apr 17 23:44:57.301077 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:44:57.304826 systemd-logind[1441]: Removed session 7. Apr 17 23:45:02.260138 systemd[1]: Created slice kubepods-besteffort-pod9568ea1a_1062_43e1_a338_90ccc1329eba.slice - libcontainer container kubepods-besteffort-pod9568ea1a_1062_43e1_a338_90ccc1329eba.slice. 
Apr 17 23:45:02.309864 kubelet[2580]: I0417 23:45:02.309731 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9568ea1a-1062-43e1-a338-90ccc1329eba-tigera-ca-bundle\") pod \"calico-typha-7bbfb47978-qvw2w\" (UID: \"9568ea1a-1062-43e1-a338-90ccc1329eba\") " pod="calico-system/calico-typha-7bbfb47978-qvw2w" Apr 17 23:45:02.310493 kubelet[2580]: I0417 23:45:02.309937 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhwtg\" (UniqueName: \"kubernetes.io/projected/9568ea1a-1062-43e1-a338-90ccc1329eba-kube-api-access-fhwtg\") pod \"calico-typha-7bbfb47978-qvw2w\" (UID: \"9568ea1a-1062-43e1-a338-90ccc1329eba\") " pod="calico-system/calico-typha-7bbfb47978-qvw2w" Apr 17 23:45:02.310493 kubelet[2580]: I0417 23:45:02.309979 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9568ea1a-1062-43e1-a338-90ccc1329eba-typha-certs\") pod \"calico-typha-7bbfb47978-qvw2w\" (UID: \"9568ea1a-1062-43e1-a338-90ccc1329eba\") " pod="calico-system/calico-typha-7bbfb47978-qvw2w" Apr 17 23:45:02.389560 systemd[1]: Created slice kubepods-besteffort-poda3b33be5_572b_4cc1_9962_39103e8862a4.slice - libcontainer container kubepods-besteffort-poda3b33be5_572b_4cc1_9962_39103e8862a4.slice. 
Apr 17 23:45:02.411432 kubelet[2580]: I0417 23:45:02.411157 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-sys-fs\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.411432 kubelet[2580]: I0417 23:45:02.411302 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-xtables-lock\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.411432 kubelet[2580]: I0417 23:45:02.411365 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-policysync\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.411432 kubelet[2580]: I0417 23:45:02.411392 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-var-lib-calico\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.411924 kubelet[2580]: I0417 23:45:02.411861 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-cni-log-dir\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.411924 kubelet[2580]: I0417 23:45:02.411907 2580 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-lib-modules\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412077 kubelet[2580]: I0417 23:45:02.411981 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-flexvol-driver-host\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412077 kubelet[2580]: I0417 23:45:02.412069 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpfh\" (UniqueName: \"kubernetes.io/projected/a3b33be5-572b-4cc1-9962-39103e8862a4-kube-api-access-bqpfh\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412194 kubelet[2580]: I0417 23:45:02.412100 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-cni-bin-dir\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412194 kubelet[2580]: I0417 23:45:02.412128 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3b33be5-572b-4cc1-9962-39103e8862a4-node-certs\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412194 kubelet[2580]: I0417 23:45:02.412158 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b33be5-572b-4cc1-9962-39103e8862a4-tigera-ca-bundle\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412353 kubelet[2580]: I0417 23:45:02.412205 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-bpffs\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412353 kubelet[2580]: I0417 23:45:02.412231 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-cni-net-dir\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412353 kubelet[2580]: I0417 23:45:02.412290 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-var-run-calico\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.412353 kubelet[2580]: I0417 23:45:02.412318 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a3b33be5-572b-4cc1-9962-39103e8862a4-nodeproc\") pod \"calico-node-pts4n\" (UID: \"a3b33be5-572b-4cc1-9962-39103e8862a4\") " pod="calico-system/calico-node-pts4n" Apr 17 23:45:02.491809 kubelet[2580]: E0417 23:45:02.489322 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:02.514706 kubelet[2580]: I0417 23:45:02.512943 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/175da1ed-b0db-4d24-bad8-f8db619e26a8-socket-dir\") pod \"csi-node-driver-q4qpd\" (UID: \"175da1ed-b0db-4d24-bad8-f8db619e26a8\") " pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:02.514706 kubelet[2580]: I0417 23:45:02.513016 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/175da1ed-b0db-4d24-bad8-f8db619e26a8-varrun\") pod \"csi-node-driver-q4qpd\" (UID: \"175da1ed-b0db-4d24-bad8-f8db619e26a8\") " pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:02.514706 kubelet[2580]: I0417 23:45:02.513232 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175da1ed-b0db-4d24-bad8-f8db619e26a8-kubelet-dir\") pod \"csi-node-driver-q4qpd\" (UID: \"175da1ed-b0db-4d24-bad8-f8db619e26a8\") " pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:02.514706 kubelet[2580]: I0417 23:45:02.513306 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlt7c\" (UniqueName: \"kubernetes.io/projected/175da1ed-b0db-4d24-bad8-f8db619e26a8-kube-api-access-wlt7c\") pod \"csi-node-driver-q4qpd\" (UID: \"175da1ed-b0db-4d24-bad8-f8db619e26a8\") " pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:02.514706 kubelet[2580]: I0417 23:45:02.513373 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/175da1ed-b0db-4d24-bad8-f8db619e26a8-registration-dir\") pod 
\"csi-node-driver-q4qpd\" (UID: \"175da1ed-b0db-4d24-bad8-f8db619e26a8\") " pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:02.555787 kubelet[2580]: E0417 23:45:02.553745 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.555787 kubelet[2580]: W0417 23:45:02.553798 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.555787 kubelet[2580]: E0417 23:45:02.553836 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.570935 containerd[1454]: time="2026-04-17T23:45:02.570880470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbfb47978-qvw2w,Uid:9568ea1a-1062-43e1-a338-90ccc1329eba,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:02.614717 kubelet[2580]: E0417 23:45:02.614605 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.614717 kubelet[2580]: W0417 23:45:02.614635 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.614717 kubelet[2580]: E0417 23:45:02.614719 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.616507 kubelet[2580]: E0417 23:45:02.615289 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.616507 kubelet[2580]: W0417 23:45:02.615311 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.616507 kubelet[2580]: E0417 23:45:02.615332 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.616507 kubelet[2580]: E0417 23:45:02.615864 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.616507 kubelet[2580]: W0417 23:45:02.615884 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.616507 kubelet[2580]: E0417 23:45:02.615905 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.617512 kubelet[2580]: E0417 23:45:02.617303 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.617512 kubelet[2580]: W0417 23:45:02.617323 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.617512 kubelet[2580]: E0417 23:45:02.617344 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.618583 kubelet[2580]: E0417 23:45:02.618563 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.618858 kubelet[2580]: W0417 23:45:02.618655 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.618858 kubelet[2580]: E0417 23:45:02.618677 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.619642 kubelet[2580]: E0417 23:45:02.619405 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.619642 kubelet[2580]: W0417 23:45:02.619454 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.619642 kubelet[2580]: E0417 23:45:02.619470 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.620442 kubelet[2580]: E0417 23:45:02.620244 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.620442 kubelet[2580]: W0417 23:45:02.620262 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.620442 kubelet[2580]: E0417 23:45:02.620279 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.621158 kubelet[2580]: E0417 23:45:02.620980 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.621158 kubelet[2580]: W0417 23:45:02.620999 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.621158 kubelet[2580]: E0417 23:45:02.621016 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.621788 kubelet[2580]: E0417 23:45:02.621549 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.621788 kubelet[2580]: W0417 23:45:02.621566 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.621788 kubelet[2580]: E0417 23:45:02.621584 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.622830 kubelet[2580]: E0417 23:45:02.622211 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.622830 kubelet[2580]: W0417 23:45:02.622229 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.622830 kubelet[2580]: E0417 23:45:02.622246 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.623126 kubelet[2580]: E0417 23:45:02.623108 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.623239 kubelet[2580]: W0417 23:45:02.623221 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.623337 kubelet[2580]: E0417 23:45:02.623321 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.625323 kubelet[2580]: E0417 23:45:02.625144 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.625323 kubelet[2580]: W0417 23:45:02.625167 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.625323 kubelet[2580]: E0417 23:45:02.625193 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.625735 kubelet[2580]: E0417 23:45:02.625716 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.625971 kubelet[2580]: W0417 23:45:02.625941 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.626103 kubelet[2580]: E0417 23:45:02.626078 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.627085 kubelet[2580]: E0417 23:45:02.627065 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.627222 kubelet[2580]: W0417 23:45:02.627199 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.627330 kubelet[2580]: E0417 23:45:02.627313 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.627864 kubelet[2580]: E0417 23:45:02.627844 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.628046 kubelet[2580]: W0417 23:45:02.627990 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.628046 kubelet[2580]: E0417 23:45:02.628016 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.628809 kubelet[2580]: E0417 23:45:02.628680 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.628809 kubelet[2580]: W0417 23:45:02.628698 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.628809 kubelet[2580]: E0417 23:45:02.628714 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.630945 kubelet[2580]: E0417 23:45:02.630175 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.630945 kubelet[2580]: W0417 23:45:02.630194 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.630945 kubelet[2580]: E0417 23:45:02.630210 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.630945 kubelet[2580]: E0417 23:45:02.630583 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.630945 kubelet[2580]: W0417 23:45:02.630597 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.630945 kubelet[2580]: E0417 23:45:02.630614 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.631528 kubelet[2580]: E0417 23:45:02.631368 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.631528 kubelet[2580]: W0417 23:45:02.631385 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.631528 kubelet[2580]: E0417 23:45:02.631401 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.633538 kubelet[2580]: E0417 23:45:02.632783 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.633538 kubelet[2580]: W0417 23:45:02.632803 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.633538 kubelet[2580]: E0417 23:45:02.632821 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.633538 kubelet[2580]: E0417 23:45:02.633380 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.633538 kubelet[2580]: W0417 23:45:02.633395 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.633538 kubelet[2580]: E0417 23:45:02.633411 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:45:02.634193 kubelet[2580]: E0417 23:45:02.633989 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.634193 kubelet[2580]: W0417 23:45:02.634008 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.634193 kubelet[2580]: E0417 23:45:02.634024 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.634669 containerd[1454]: time="2026-04-17T23:45:02.632317645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:02.634669 containerd[1454]: time="2026-04-17T23:45:02.632438658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:02.634669 containerd[1454]: time="2026-04-17T23:45:02.632484453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:02.634669 containerd[1454]: time="2026-04-17T23:45:02.632643060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:02.635251 kubelet[2580]: E0417 23:45:02.635027 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.635251 kubelet[2580]: W0417 23:45:02.635045 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.635251 kubelet[2580]: E0417 23:45:02.635061 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:45:02.636198 kubelet[2580]: E0417 23:45:02.635993 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:45:02.636198 kubelet[2580]: W0417 23:45:02.636008 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:45:02.636198 kubelet[2580]: E0417 23:45:02.636021 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:45:02.637362 kubelet[2580]: E0417 23:45:02.637151 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:02.637362 kubelet[2580]: W0417 23:45:02.637169 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:02.637362 kubelet[2580]: E0417 23:45:02.637186 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:02.659853 kubelet[2580]: E0417 23:45:02.659820 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:02.659853 kubelet[2580]: W0417 23:45:02.659855 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:02.660122 kubelet[2580]: E0417 23:45:02.659898 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:02.668060 systemd[1]: Started cri-containerd-c7476015fb17a6da643e47ba096078c91e045a091a8253ed500891b14e45cba5.scope - libcontainer container c7476015fb17a6da643e47ba096078c91e045a091a8253ed500891b14e45cba5.
Apr 17 23:45:02.700623 containerd[1454]: time="2026-04-17T23:45:02.700554857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pts4n,Uid:a3b33be5-572b-4cc1-9962-39103e8862a4,Namespace:calico-system,Attempt:0,}"
Apr 17 23:45:02.749409 containerd[1454]: time="2026-04-17T23:45:02.749169229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bbfb47978-qvw2w,Uid:9568ea1a-1062-43e1-a338-90ccc1329eba,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7476015fb17a6da643e47ba096078c91e045a091a8253ed500891b14e45cba5\""
Apr 17 23:45:02.753247 containerd[1454]: time="2026-04-17T23:45:02.753012049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 17 23:45:02.765983 containerd[1454]: time="2026-04-17T23:45:02.765505468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:45:02.765983 containerd[1454]: time="2026-04-17T23:45:02.765680421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:45:02.765983 containerd[1454]: time="2026-04-17T23:45:02.765717857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:02.766439 containerd[1454]: time="2026-04-17T23:45:02.765883911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:02.796017 systemd[1]: Started cri-containerd-fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6.scope - libcontainer container fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6.
Apr 17 23:45:02.842066 containerd[1454]: time="2026-04-17T23:45:02.842005364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pts4n,Uid:a3b33be5-572b-4cc1-9962-39103e8862a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\""
Apr 17 23:45:03.860261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463658156.mount: Deactivated successfully.
Apr 17 23:45:04.283076 kubelet[2580]: E0417 23:45:04.282087 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8"
Apr 17 23:45:04.868030 containerd[1454]: time="2026-04-17T23:45:04.867960026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:04.869978 containerd[1454]: time="2026-04-17T23:45:04.869653604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 17 23:45:04.871811 containerd[1454]: time="2026-04-17T23:45:04.871584128Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:04.876393 containerd[1454]: time="2026-04-17T23:45:04.876204676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:04.877642 containerd[1454]: time="2026-04-17T23:45:04.877481954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.124421236s"
Apr 17 23:45:04.877642 containerd[1454]: time="2026-04-17T23:45:04.877531810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 17 23:45:04.881792 containerd[1454]: time="2026-04-17T23:45:04.879984291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 17 23:45:04.906287 containerd[1454]: time="2026-04-17T23:45:04.906217041Z" level=info msg="CreateContainer within sandbox \"c7476015fb17a6da643e47ba096078c91e045a091a8253ed500891b14e45cba5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 17 23:45:04.930573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342611831.mount: Deactivated successfully.
Apr 17 23:45:04.935813 containerd[1454]: time="2026-04-17T23:45:04.935519559Z" level=info msg="CreateContainer within sandbox \"c7476015fb17a6da643e47ba096078c91e045a091a8253ed500891b14e45cba5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a6aec7b26cd5c1d8378fdb4ed562991abf08b1582fc32b3eb381fedb2e1ef0b2\""
Apr 17 23:45:04.938626 containerd[1454]: time="2026-04-17T23:45:04.936614459Z" level=info msg="StartContainer for \"a6aec7b26cd5c1d8378fdb4ed562991abf08b1582fc32b3eb381fedb2e1ef0b2\""
Apr 17 23:45:05.038424 systemd[1]: Started cri-containerd-a6aec7b26cd5c1d8378fdb4ed562991abf08b1582fc32b3eb381fedb2e1ef0b2.scope - libcontainer container a6aec7b26cd5c1d8378fdb4ed562991abf08b1582fc32b3eb381fedb2e1ef0b2.
Apr 17 23:45:05.150068 containerd[1454]: time="2026-04-17T23:45:05.149869211Z" level=info msg="StartContainer for \"a6aec7b26cd5c1d8378fdb4ed562991abf08b1582fc32b3eb381fedb2e1ef0b2\" returns successfully"
Apr 17 23:45:05.529686 kubelet[2580]: E0417 23:45:05.529033 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.529686 kubelet[2580]: W0417 23:45:05.529066 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.529686 kubelet[2580]: E0417 23:45:05.529095 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.534805 kubelet[2580]: E0417 23:45:05.533063 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.534805 kubelet[2580]: W0417 23:45:05.533090 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.534805 kubelet[2580]: E0417 23:45:05.533234 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.534805 kubelet[2580]: E0417 23:45:05.534070 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.534805 kubelet[2580]: W0417 23:45:05.534088 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.535582 kubelet[2580]: E0417 23:45:05.535204 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.536302 kubelet[2580]: E0417 23:45:05.536279 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.537306 kubelet[2580]: W0417 23:45:05.536372 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.537306 kubelet[2580]: E0417 23:45:05.536399 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.539381 kubelet[2580]: E0417 23:45:05.538577 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.539381 kubelet[2580]: W0417 23:45:05.538596 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.539381 kubelet[2580]: E0417 23:45:05.538618 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.540930 kubelet[2580]: E0417 23:45:05.540393 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.540930 kubelet[2580]: W0417 23:45:05.540450 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.540930 kubelet[2580]: E0417 23:45:05.540470 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.542149 kubelet[2580]: E0417 23:45:05.541873 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.542149 kubelet[2580]: W0417 23:45:05.541893 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.542149 kubelet[2580]: E0417 23:45:05.541913 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.543257 kubelet[2580]: E0417 23:45:05.542864 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.543257 kubelet[2580]: W0417 23:45:05.542903 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.543257 kubelet[2580]: E0417 23:45:05.542921 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.543870 kubelet[2580]: E0417 23:45:05.543607 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.543870 kubelet[2580]: W0417 23:45:05.543624 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.543870 kubelet[2580]: E0417 23:45:05.543639 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.546442 kubelet[2580]: E0417 23:45:05.546172 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.546442 kubelet[2580]: W0417 23:45:05.546208 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.546442 kubelet[2580]: E0417 23:45:05.546226 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.547041 kubelet[2580]: E0417 23:45:05.546910 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.547041 kubelet[2580]: W0417 23:45:05.546932 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.547041 kubelet[2580]: E0417 23:45:05.546948 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.547779 kubelet[2580]: E0417 23:45:05.547593 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.547779 kubelet[2580]: W0417 23:45:05.547611 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.547779 kubelet[2580]: E0417 23:45:05.547627 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.548563 kubelet[2580]: E0417 23:45:05.548417 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.548563 kubelet[2580]: W0417 23:45:05.548460 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.548563 kubelet[2580]: E0417 23:45:05.548478 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.549331 kubelet[2580]: E0417 23:45:05.549202 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.549331 kubelet[2580]: W0417 23:45:05.549220 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.549331 kubelet[2580]: E0417 23:45:05.549254 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.550002 kubelet[2580]: E0417 23:45:05.549854 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.550002 kubelet[2580]: W0417 23:45:05.549876 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.550002 kubelet[2580]: E0417 23:45:05.549892 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.649601 kubelet[2580]: E0417 23:45:05.649393 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.649601 kubelet[2580]: W0417 23:45:05.649425 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.649601 kubelet[2580]: E0417 23:45:05.649451 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.650369 kubelet[2580]: E0417 23:45:05.650313 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.650369 kubelet[2580]: W0417 23:45:05.650332 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.650369 kubelet[2580]: E0417 23:45:05.650352 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.652094 kubelet[2580]: E0417 23:45:05.651889 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.652094 kubelet[2580]: W0417 23:45:05.651908 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.652094 kubelet[2580]: E0417 23:45:05.651926 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.652941 kubelet[2580]: E0417 23:45:05.652764 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.652941 kubelet[2580]: W0417 23:45:05.652794 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.652941 kubelet[2580]: E0417 23:45:05.652811 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.653434 kubelet[2580]: E0417 23:45:05.653260 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.653434 kubelet[2580]: W0417 23:45:05.653289 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.653434 kubelet[2580]: E0417 23:45:05.653307 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.653869 kubelet[2580]: E0417 23:45:05.653853 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.654585 kubelet[2580]: W0417 23:45:05.654560 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.654781 kubelet[2580]: E0417 23:45:05.654679 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.655311 kubelet[2580]: E0417 23:45:05.655180 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.655311 kubelet[2580]: W0417 23:45:05.655196 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.655311 kubelet[2580]: E0417 23:45:05.655212 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.658547 kubelet[2580]: E0417 23:45:05.658159 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.658547 kubelet[2580]: W0417 23:45:05.658178 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.658547 kubelet[2580]: E0417 23:45:05.658193 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.660150 kubelet[2580]: E0417 23:45:05.660133 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.660356 kubelet[2580]: W0417 23:45:05.660245 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.660356 kubelet[2580]: E0417 23:45:05.660267 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.660913 kubelet[2580]: E0417 23:45:05.660792 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.660913 kubelet[2580]: W0417 23:45:05.660809 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.660913 kubelet[2580]: E0417 23:45:05.660825 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.662608 kubelet[2580]: E0417 23:45:05.662395 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.662608 kubelet[2580]: W0417 23:45:05.662414 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.662608 kubelet[2580]: E0417 23:45:05.662432 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.663059 kubelet[2580]: E0417 23:45:05.662975 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.663059 kubelet[2580]: W0417 23:45:05.662994 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.663059 kubelet[2580]: E0417 23:45:05.663012 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.665238 kubelet[2580]: E0417 23:45:05.665113 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.665238 kubelet[2580]: W0417 23:45:05.665133 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.665238 kubelet[2580]: E0417 23:45:05.665150 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.666023 kubelet[2580]: E0417 23:45:05.665856 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.666023 kubelet[2580]: W0417 23:45:05.665875 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.666023 kubelet[2580]: E0417 23:45:05.665891 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.666610 kubelet[2580]: E0417 23:45:05.666594 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.666701 kubelet[2580]: W0417 23:45:05.666688 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.666833 kubelet[2580]: E0417 23:45:05.666818 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.668028 kubelet[2580]: E0417 23:45:05.668009 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.668245 kubelet[2580]: W0417 23:45:05.668134 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.668245 kubelet[2580]: E0417 23:45:05.668155 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.669866 kubelet[2580]: E0417 23:45:05.669782 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.669866 kubelet[2580]: W0417 23:45:05.669804 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.669866 kubelet[2580]: E0417 23:45:05.669820 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.670868 kubelet[2580]: E0417 23:45:05.670795 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:45:05.670868 kubelet[2580]: W0417 23:45:05.670814 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:45:05.670868 kubelet[2580]: E0417 23:45:05.670830 2580 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:45:05.925554 containerd[1454]: time="2026-04-17T23:45:05.925482444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:05.927201 containerd[1454]: time="2026-04-17T23:45:05.927112771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 17 23:45:05.929026 containerd[1454]: time="2026-04-17T23:45:05.928937246Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:05.932999 containerd[1454]: time="2026-04-17T23:45:05.932927338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:05.934559 containerd[1454]: time="2026-04-17T23:45:05.934010802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.053979112s"
Apr 17 23:45:05.934559 containerd[1454]: time="2026-04-17T23:45:05.934065074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 17 23:45:05.940043 containerd[1454]: time="2026-04-17T23:45:05.939993684Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 23:45:05.966324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062224155.mount: Deactivated successfully.
Apr 17 23:45:05.974104 containerd[1454]: time="2026-04-17T23:45:05.974041697Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e\""
Apr 17 23:45:05.976780 containerd[1454]: time="2026-04-17T23:45:05.974830835Z" level=info msg="StartContainer for \"d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e\""
Apr 17 23:45:06.032059 systemd[1]: Started cri-containerd-d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e.scope - libcontainer container d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e.
Apr 17 23:45:06.077574 containerd[1454]: time="2026-04-17T23:45:06.077509058Z" level=info msg="StartContainer for \"d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e\" returns successfully"
Apr 17 23:45:06.092274 systemd[1]: cri-containerd-d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e.scope: Deactivated successfully.
Apr 17 23:45:06.282968 kubelet[2580]: E0417 23:45:06.282790 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8"
Apr 17 23:45:06.447823 kubelet[2580]: I0417 23:45:06.447780 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:45:06.472111 kubelet[2580]: I0417 23:45:06.471712 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bbfb47978-qvw2w" podStartSLOduration=2.343777186 podStartE2EDuration="4.471407277s" podCreationTimestamp="2026-04-17 23:45:02 +0000 UTC" firstStartedPulling="2026-04-17 23:45:02.751269941 +0000 UTC m=+21.648908350" lastFinishedPulling="2026-04-17 23:45:04.878900015 +0000 UTC m=+23.776538441" observedRunningTime="2026-04-17 23:45:05.518610857 +0000 UTC m=+24.416249327" watchObservedRunningTime="2026-04-17 23:45:06.471407277 +0000 UTC m=+25.369045709"
Apr 17 23:45:06.891439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e-rootfs.mount: Deactivated successfully.
Apr 17 23:45:07.019921 containerd[1454]: time="2026-04-17T23:45:07.019834526Z" level=info msg="shim disconnected" id=d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e namespace=k8s.io
Apr 17 23:45:07.019921 containerd[1454]: time="2026-04-17T23:45:07.019912882Z" level=warning msg="cleaning up after shim disconnected" id=d24cb9172744c6f9a7efcdb02368c0ca1c9f1b52bba40b1780e04522dea9692e namespace=k8s.io
Apr 17 23:45:07.019921 containerd[1454]: time="2026-04-17T23:45:07.019928140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:45:07.454227 containerd[1454]: time="2026-04-17T23:45:07.454040403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:45:08.282227 kubelet[2580]: E0417 23:45:08.282147 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8"
Apr 17 23:45:10.283628 kubelet[2580]: E0417 23:45:10.283321 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8"
Apr 17 23:45:12.283235 kubelet[2580]: E0417 23:45:12.283170 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8"
Apr 17 23:45:14.282529 kubelet[2580]: E0417 23:45:14.282392 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8"
Apr 17 23:45:14.517232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274321861.mount: Deactivated successfully.
Apr 17 23:45:14.554606 containerd[1454]: time="2026-04-17T23:45:14.554432999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:14.556728 containerd[1454]: time="2026-04-17T23:45:14.556625373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 23:45:14.558517 containerd[1454]: time="2026-04-17T23:45:14.558428920Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:14.564051 containerd[1454]: time="2026-04-17T23:45:14.563941717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:14.565783 containerd[1454]: time="2026-04-17T23:45:14.564964638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.11087161s"
Apr 17 23:45:14.565783 containerd[1454]: time="2026-04-17T23:45:14.565016114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 23:45:14.572127 containerd[1454]: time="2026-04-17T23:45:14.572071158Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:45:14.602279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222100282.mount: Deactivated successfully.
Apr 17 23:45:14.606066 containerd[1454]: time="2026-04-17T23:45:14.605997331Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629\""
Apr 17 23:45:14.607308 containerd[1454]: time="2026-04-17T23:45:14.607135165Z" level=info msg="StartContainer for \"5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629\""
Apr 17 23:45:14.660986 systemd[1]: Started cri-containerd-5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629.scope - libcontainer container 5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629.
Apr 17 23:45:14.708662 containerd[1454]: time="2026-04-17T23:45:14.708498078Z" level=info msg="StartContainer for \"5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629\" returns successfully"
Apr 17 23:45:14.766896 systemd[1]: cri-containerd-5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629.scope: Deactivated successfully.
Apr 17 23:45:15.448432 kubelet[2580]: I0417 23:45:15.448165 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:45:15.522500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629-rootfs.mount: Deactivated successfully.
Apr 17 23:45:16.282194 kubelet[2580]: E0417 23:45:16.281969 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:16.396884 containerd[1454]: time="2026-04-17T23:45:16.396766329Z" level=info msg="shim disconnected" id=5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629 namespace=k8s.io Apr 17 23:45:16.396884 containerd[1454]: time="2026-04-17T23:45:16.396866337Z" level=warning msg="cleaning up after shim disconnected" id=5895df32baa7ab884392769ed8b45e5a5afe899c1449f626beb9e828808ee629 namespace=k8s.io Apr 17 23:45:16.396884 containerd[1454]: time="2026-04-17T23:45:16.396884400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:45:16.485141 containerd[1454]: time="2026-04-17T23:45:16.485087506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:45:18.282031 kubelet[2580]: E0417 23:45:18.281947 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:20.282182 kubelet[2580]: E0417 23:45:20.282103 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:22.282519 kubelet[2580]: E0417 23:45:22.282453 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:22.667873 containerd[1454]: time="2026-04-17T23:45:22.667799225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:22.669473 containerd[1454]: time="2026-04-17T23:45:22.669401038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:45:22.671383 containerd[1454]: time="2026-04-17T23:45:22.671149598Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:22.674634 containerd[1454]: time="2026-04-17T23:45:22.674531304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:22.675735 containerd[1454]: time="2026-04-17T23:45:22.675656947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 6.190510812s" Apr 17 23:45:22.675735 containerd[1454]: time="2026-04-17T23:45:22.675706327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:45:22.682821 containerd[1454]: time="2026-04-17T23:45:22.682733217Z" level=info msg="CreateContainer within sandbox 
\"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:45:22.716051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601163502.mount: Deactivated successfully. Apr 17 23:45:22.723542 containerd[1454]: time="2026-04-17T23:45:22.723301306Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00\"" Apr 17 23:45:22.724172 containerd[1454]: time="2026-04-17T23:45:22.724102619Z" level=info msg="StartContainer for \"d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00\"" Apr 17 23:45:22.776112 systemd[1]: Started cri-containerd-d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00.scope - libcontainer container d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00. Apr 17 23:45:22.820103 containerd[1454]: time="2026-04-17T23:45:22.820016641Z" level=info msg="StartContainer for \"d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00\" returns successfully" Apr 17 23:45:23.191903 systemd[1]: Started sshd@7-10.128.0.99:22-185.114.206.48:43646.service - OpenSSH per-connection server daemon (185.114.206.48:43646). Apr 17 23:45:23.737234 sshd[3407]: Invalid user admin from 185.114.206.48 port 43646 Apr 17 23:45:23.811822 sshd[3407]: Connection closed by invalid user admin 185.114.206.48 port 43646 [preauth] Apr 17 23:45:23.817759 systemd[1]: sshd@7-10.128.0.99:22-185.114.206.48:43646.service: Deactivated successfully. Apr 17 23:45:23.940031 systemd[1]: cri-containerd-d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00.scope: Deactivated successfully. 
Apr 17 23:45:23.978543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00-rootfs.mount: Deactivated successfully. Apr 17 23:45:24.003433 kubelet[2580]: I0417 23:45:24.003293 2580 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:45:24.308874 kubelet[2580]: I0417 23:45:24.308655 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12f85385-d410-47a5-885e-f33eab72be77-config-volume\") pod \"coredns-674b8bbfcf-fddcj\" (UID: \"12f85385-d410-47a5-885e-f33eab72be77\") " pod="kube-system/coredns-674b8bbfcf-fddcj" Apr 17 23:45:24.308874 kubelet[2580]: I0417 23:45:24.308725 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqs2\" (UniqueName: \"kubernetes.io/projected/12f85385-d410-47a5-885e-f33eab72be77-kube-api-access-kgqs2\") pod \"coredns-674b8bbfcf-fddcj\" (UID: \"12f85385-d410-47a5-885e-f33eab72be77\") " pod="kube-system/coredns-674b8bbfcf-fddcj" Apr 17 23:45:24.410352 kubelet[2580]: E0417 23:45:24.409941 2580 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered Apr 17 23:45:24.410352 kubelet[2580]: E0417 23:45:24.410055 2580 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12f85385-d410-47a5-885e-f33eab72be77-config-volume podName:12f85385-d410-47a5-885e-f33eab72be77 nodeName:}" failed. No retries permitted until 2026-04-17 23:45:24.910031226 +0000 UTC m=+43.807669652 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/12f85385-d410-47a5-885e-f33eab72be77-config-volume") pod "coredns-674b8bbfcf-fddcj" (UID: "12f85385-d410-47a5-885e-f33eab72be77") : object "kube-system"/"coredns" not registered Apr 17 23:45:24.492438 systemd[1]: Created slice kubepods-burstable-pod12f85385_d410_47a5_885e_f33eab72be77.slice - libcontainer container kubepods-burstable-pod12f85385_d410_47a5_885e_f33eab72be77.slice. Apr 17 23:45:24.506446 systemd[1]: Created slice kubepods-burstable-pod061fc661_7623_4bb3_8cee_51fca4a6f0d4.slice - libcontainer container kubepods-burstable-pod061fc661_7623_4bb3_8cee_51fca4a6f0d4.slice. Apr 17 23:45:24.510765 kubelet[2580]: I0417 23:45:24.510027 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqwfd\" (UniqueName: \"kubernetes.io/projected/061fc661-7623-4bb3-8cee-51fca4a6f0d4-kube-api-access-fqwfd\") pod \"coredns-674b8bbfcf-dt4x5\" (UID: \"061fc661-7623-4bb3-8cee-51fca4a6f0d4\") " pod="kube-system/coredns-674b8bbfcf-dt4x5" Apr 17 23:45:24.510765 kubelet[2580]: I0417 23:45:24.510124 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/061fc661-7623-4bb3-8cee-51fca4a6f0d4-config-volume\") pod \"coredns-674b8bbfcf-dt4x5\" (UID: \"061fc661-7623-4bb3-8cee-51fca4a6f0d4\") " pod="kube-system/coredns-674b8bbfcf-dt4x5" Apr 17 23:45:24.536912 containerd[1454]: time="2026-04-17T23:45:24.536484881Z" level=info msg="shim disconnected" id=d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00 namespace=k8s.io Apr 17 23:45:24.536912 containerd[1454]: time="2026-04-17T23:45:24.536590845Z" level=warning msg="cleaning up after shim disconnected" id=d0441ee979bf9744f01688171c323e3f8b50f096e8f5170a76599869504dfe00 namespace=k8s.io Apr 17 23:45:24.536912 containerd[1454]: time="2026-04-17T23:45:24.536608072Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:45:24.561608 systemd[1]: Created slice kubepods-besteffort-podf7b55c10_3440_4ee3_957d_61f422c06f09.slice - libcontainer container kubepods-besteffort-podf7b55c10_3440_4ee3_957d_61f422c06f09.slice. Apr 17 23:45:24.589549 systemd[1]: Created slice kubepods-besteffort-pod175da1ed_b0db_4d24_bad8_f8db619e26a8.slice - libcontainer container kubepods-besteffort-pod175da1ed_b0db_4d24_bad8_f8db619e26a8.slice. Apr 17 23:45:24.599982 containerd[1454]: time="2026-04-17T23:45:24.598307362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4qpd,Uid:175da1ed-b0db-4d24-bad8-f8db619e26a8,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:24.609643 systemd[1]: Created slice kubepods-besteffort-pod31cae8c7_c29a_40c1_b51e_df324d3ffd96.slice - libcontainer container kubepods-besteffort-pod31cae8c7_c29a_40c1_b51e_df324d3ffd96.slice. Apr 17 23:45:24.610426 kubelet[2580]: I0417 23:45:24.610372 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2cf\" (UniqueName: \"kubernetes.io/projected/e5226537-bc0c-470a-98d4-4745df18b74f-kube-api-access-tn2cf\") pod \"calico-apiserver-64fd9bf59-fr8st\" (UID: \"e5226537-bc0c-470a-98d4-4745df18b74f\") " pod="calico-system/calico-apiserver-64fd9bf59-fr8st" Apr 17 23:45:24.610426 kubelet[2580]: I0417 23:45:24.610418 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cae8c7-c29a-40c1-b51e-df324d3ffd96-config\") pod \"goldmane-5b85766d88-57wrg\" (UID: \"31cae8c7-c29a-40c1-b51e-df324d3ffd96\") " pod="calico-system/goldmane-5b85766d88-57wrg" Apr 17 23:45:24.610571 kubelet[2580]: I0417 23:45:24.610447 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-backend-key-pair\") pod \"whisker-5ff9bf5f47-pb5tr\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " pod="calico-system/whisker-5ff9bf5f47-pb5tr" Apr 17 23:45:24.610571 kubelet[2580]: I0417 23:45:24.610521 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpd7\" (UniqueName: \"kubernetes.io/projected/8575dd81-0a24-4ece-99ca-0578256ac1d0-kube-api-access-zlpd7\") pod \"whisker-5ff9bf5f47-pb5tr\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " pod="calico-system/whisker-5ff9bf5f47-pb5tr" Apr 17 23:45:24.610684 kubelet[2580]: I0417 23:45:24.610576 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94nxb\" (UniqueName: \"kubernetes.io/projected/f7b55c10-3440-4ee3-957d-61f422c06f09-kube-api-access-94nxb\") pod \"calico-kube-controllers-76dd74c988-x7l4x\" (UID: \"f7b55c10-3440-4ee3-957d-61f422c06f09\") " pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" Apr 17 23:45:24.610684 kubelet[2580]: I0417 23:45:24.610611 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsm2\" (UniqueName: \"kubernetes.io/projected/31cae8c7-c29a-40c1-b51e-df324d3ffd96-kube-api-access-9wsm2\") pod \"goldmane-5b85766d88-57wrg\" (UID: \"31cae8c7-c29a-40c1-b51e-df324d3ffd96\") " pod="calico-system/goldmane-5b85766d88-57wrg" Apr 17 23:45:24.613979 kubelet[2580]: I0417 23:45:24.611210 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b55c10-3440-4ee3-957d-61f422c06f09-tigera-ca-bundle\") pod \"calico-kube-controllers-76dd74c988-x7l4x\" (UID: \"f7b55c10-3440-4ee3-957d-61f422c06f09\") " pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" Apr 17 23:45:24.613979 kubelet[2580]: I0417 23:45:24.611281 2580 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5226537-bc0c-470a-98d4-4745df18b74f-calico-apiserver-certs\") pod \"calico-apiserver-64fd9bf59-fr8st\" (UID: \"e5226537-bc0c-470a-98d4-4745df18b74f\") " pod="calico-system/calico-apiserver-64fd9bf59-fr8st" Apr 17 23:45:24.613979 kubelet[2580]: I0417 23:45:24.611314 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-nginx-config\") pod \"whisker-5ff9bf5f47-pb5tr\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " pod="calico-system/whisker-5ff9bf5f47-pb5tr" Apr 17 23:45:24.613979 kubelet[2580]: I0417 23:45:24.611390 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c26d0a75-f7af-4717-af6f-93f123500133-calico-apiserver-certs\") pod \"calico-apiserver-64fd9bf59-bf5g9\" (UID: \"c26d0a75-f7af-4717-af6f-93f123500133\") " pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" Apr 17 23:45:24.613979 kubelet[2580]: I0417 23:45:24.611450 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31cae8c7-c29a-40c1-b51e-df324d3ffd96-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-57wrg\" (UID: \"31cae8c7-c29a-40c1-b51e-df324d3ffd96\") " pod="calico-system/goldmane-5b85766d88-57wrg" Apr 17 23:45:24.614345 kubelet[2580]: I0417 23:45:24.611486 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssmj\" (UniqueName: \"kubernetes.io/projected/c26d0a75-f7af-4717-af6f-93f123500133-kube-api-access-sssmj\") pod \"calico-apiserver-64fd9bf59-bf5g9\" (UID: \"c26d0a75-f7af-4717-af6f-93f123500133\") " 
pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" Apr 17 23:45:24.614345 kubelet[2580]: I0417 23:45:24.611539 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/31cae8c7-c29a-40c1-b51e-df324d3ffd96-goldmane-key-pair\") pod \"goldmane-5b85766d88-57wrg\" (UID: \"31cae8c7-c29a-40c1-b51e-df324d3ffd96\") " pod="calico-system/goldmane-5b85766d88-57wrg" Apr 17 23:45:24.614345 kubelet[2580]: I0417 23:45:24.611568 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-ca-bundle\") pod \"whisker-5ff9bf5f47-pb5tr\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " pod="calico-system/whisker-5ff9bf5f47-pb5tr" Apr 17 23:45:24.658255 systemd[1]: Created slice kubepods-besteffort-pode5226537_bc0c_470a_98d4_4745df18b74f.slice - libcontainer container kubepods-besteffort-pode5226537_bc0c_470a_98d4_4745df18b74f.slice. Apr 17 23:45:24.689974 systemd[1]: Created slice kubepods-besteffort-pod8575dd81_0a24_4ece_99ca_0578256ac1d0.slice - libcontainer container kubepods-besteffort-pod8575dd81_0a24_4ece_99ca_0578256ac1d0.slice. Apr 17 23:45:24.708038 systemd[1]: Created slice kubepods-besteffort-podc26d0a75_f7af_4717_af6f_93f123500133.slice - libcontainer container kubepods-besteffort-podc26d0a75_f7af_4717_af6f_93f123500133.slice. 
Apr 17 23:45:24.821629 containerd[1454]: time="2026-04-17T23:45:24.819781422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dt4x5,Uid:061fc661-7623-4bb3-8cee-51fca4a6f0d4,Namespace:kube-system,Attempt:0,}" Apr 17 23:45:24.842315 containerd[1454]: time="2026-04-17T23:45:24.842239866Z" level=error msg="Failed to destroy network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.842816 containerd[1454]: time="2026-04-17T23:45:24.842745880Z" level=error msg="encountered an error cleaning up failed sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.842932 containerd[1454]: time="2026-04-17T23:45:24.842866614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4qpd,Uid:175da1ed-b0db-4d24-bad8-f8db619e26a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.843331 kubelet[2580]: E0417 23:45:24.843184 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.843452 kubelet[2580]: E0417 23:45:24.843356 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:24.843452 kubelet[2580]: E0417 23:45:24.843395 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q4qpd" Apr 17 23:45:24.843825 kubelet[2580]: E0417 23:45:24.843499 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q4qpd_calico-system(175da1ed-b0db-4d24-bad8-f8db619e26a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q4qpd_calico-system(175da1ed-b0db-4d24-bad8-f8db619e26a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:24.880017 containerd[1454]: time="2026-04-17T23:45:24.879377583Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-76dd74c988-x7l4x,Uid:f7b55c10-3440-4ee3-957d-61f422c06f09,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:24.929290 containerd[1454]: time="2026-04-17T23:45:24.928736830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-57wrg,Uid:31cae8c7-c29a-40c1-b51e-df324d3ffd96,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:24.932579 containerd[1454]: time="2026-04-17T23:45:24.932514423Z" level=error msg="Failed to destroy network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.933236 containerd[1454]: time="2026-04-17T23:45:24.933185303Z" level=error msg="encountered an error cleaning up failed sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.933956 containerd[1454]: time="2026-04-17T23:45:24.933904998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dt4x5,Uid:061fc661-7623-4bb3-8cee-51fca4a6f0d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.935946 kubelet[2580]: E0417 23:45:24.935143 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:24.935946 kubelet[2580]: E0417 23:45:24.935226 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dt4x5" Apr 17 23:45:24.935946 kubelet[2580]: E0417 23:45:24.935263 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dt4x5" Apr 17 23:45:24.936214 kubelet[2580]: E0417 23:45:24.935342 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dt4x5_kube-system(061fc661-7623-4bb3-8cee-51fca4a6f0d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dt4x5_kube-system(061fc661-7623-4bb3-8cee-51fca4a6f0d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dt4x5" 
podUID="061fc661-7623-4bb3-8cee-51fca4a6f0d4" Apr 17 23:45:24.983912 containerd[1454]: time="2026-04-17T23:45:24.982565319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-fr8st,Uid:e5226537-bc0c-470a-98d4-4745df18b74f,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:25.001527 containerd[1454]: time="2026-04-17T23:45:25.000865037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff9bf5f47-pb5tr,Uid:8575dd81-0a24-4ece-99ca-0578256ac1d0,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:25.022270 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8-shm.mount: Deactivated successfully. Apr 17 23:45:25.026356 containerd[1454]: time="2026-04-17T23:45:25.022457145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-bf5g9,Uid:c26d0a75-f7af-4717-af6f-93f123500133,Namespace:calico-system,Attempt:0,}" Apr 17 23:45:25.102947 containerd[1454]: time="2026-04-17T23:45:25.102083915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fddcj,Uid:12f85385-d410-47a5-885e-f33eab72be77,Namespace:kube-system,Attempt:0,}" Apr 17 23:45:25.129333 containerd[1454]: time="2026-04-17T23:45:25.129278975Z" level=error msg="Failed to destroy network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.130135 containerd[1454]: time="2026-04-17T23:45:25.130060591Z" level=error msg="encountered an error cleaning up failed sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.130655 containerd[1454]: time="2026-04-17T23:45:25.130405789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76dd74c988-x7l4x,Uid:f7b55c10-3440-4ee3-957d-61f422c06f09,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.131134 kubelet[2580]: E0417 23:45:25.131077 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.133653 kubelet[2580]: E0417 23:45:25.131166 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" Apr 17 23:45:25.133653 kubelet[2580]: E0417 23:45:25.131204 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" Apr 17 23:45:25.133653 kubelet[2580]: E0417 23:45:25.131290 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76dd74c988-x7l4x_calico-system(f7b55c10-3440-4ee3-957d-61f422c06f09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76dd74c988-x7l4x_calico-system(f7b55c10-3440-4ee3-957d-61f422c06f09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" podUID="f7b55c10-3440-4ee3-957d-61f422c06f09" Apr 17 23:45:25.218613 containerd[1454]: time="2026-04-17T23:45:25.217532429Z" level=error msg="Failed to destroy network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.218613 containerd[1454]: time="2026-04-17T23:45:25.218305672Z" level=error msg="encountered an error cleaning up failed sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.218613 containerd[1454]: time="2026-04-17T23:45:25.218448254Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-57wrg,Uid:31cae8c7-c29a-40c1-b51e-df324d3ffd96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.219098 kubelet[2580]: E0417 23:45:25.218797 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.219098 kubelet[2580]: E0417 23:45:25.218871 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-57wrg" Apr 17 23:45:25.219098 kubelet[2580]: E0417 23:45:25.218903 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-57wrg" Apr 17 23:45:25.219277 kubelet[2580]: E0417 23:45:25.218987 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-5b85766d88-57wrg_calico-system(31cae8c7-c29a-40c1-b51e-df324d3ffd96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-57wrg_calico-system(31cae8c7-c29a-40c1-b51e-df324d3ffd96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-57wrg" podUID="31cae8c7-c29a-40c1-b51e-df324d3ffd96" Apr 17 23:45:25.316811 containerd[1454]: time="2026-04-17T23:45:25.316140386Z" level=error msg="Failed to destroy network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.317666 containerd[1454]: time="2026-04-17T23:45:25.316980495Z" level=error msg="encountered an error cleaning up failed sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.317666 containerd[1454]: time="2026-04-17T23:45:25.317451721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ff9bf5f47-pb5tr,Uid:8575dd81-0a24-4ece-99ca-0578256ac1d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.318014 kubelet[2580]: E0417 23:45:25.317796 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.318014 kubelet[2580]: E0417 23:45:25.317871 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ff9bf5f47-pb5tr" Apr 17 23:45:25.318014 kubelet[2580]: E0417 23:45:25.317906 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5ff9bf5f47-pb5tr" Apr 17 23:45:25.318904 kubelet[2580]: E0417 23:45:25.317975 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5ff9bf5f47-pb5tr_calico-system(8575dd81-0a24-4ece-99ca-0578256ac1d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5ff9bf5f47-pb5tr_calico-system(8575dd81-0a24-4ece-99ca-0578256ac1d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ff9bf5f47-pb5tr" podUID="8575dd81-0a24-4ece-99ca-0578256ac1d0" Apr 17 23:45:25.332654 containerd[1454]: time="2026-04-17T23:45:25.332587248Z" level=error msg="Failed to destroy network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.333191 containerd[1454]: time="2026-04-17T23:45:25.333141051Z" level=error msg="encountered an error cleaning up failed sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.334150 containerd[1454]: time="2026-04-17T23:45:25.333234021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-fr8st,Uid:e5226537-bc0c-470a-98d4-4745df18b74f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.334965 kubelet[2580]: E0417 23:45:25.333575 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.334965 kubelet[2580]: E0417 23:45:25.333789 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64fd9bf59-fr8st" Apr 17 23:45:25.334965 kubelet[2580]: E0417 23:45:25.333827 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64fd9bf59-fr8st" Apr 17 23:45:25.335155 kubelet[2580]: E0417 23:45:25.333952 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64fd9bf59-fr8st_calico-system(e5226537-bc0c-470a-98d4-4745df18b74f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64fd9bf59-fr8st_calico-system(e5226537-bc0c-470a-98d4-4745df18b74f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64fd9bf59-fr8st" 
podUID="e5226537-bc0c-470a-98d4-4745df18b74f" Apr 17 23:45:25.360840 containerd[1454]: time="2026-04-17T23:45:25.358962493Z" level=error msg="Failed to destroy network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.360840 containerd[1454]: time="2026-04-17T23:45:25.359438024Z" level=error msg="encountered an error cleaning up failed sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.360840 containerd[1454]: time="2026-04-17T23:45:25.359513132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fddcj,Uid:12f85385-d410-47a5-885e-f33eab72be77,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.361736 kubelet[2580]: E0417 23:45:25.361405 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.363827 kubelet[2580]: E0417 23:45:25.363745 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fddcj" Apr 17 23:45:25.365182 kubelet[2580]: E0417 23:45:25.363986 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fddcj" Apr 17 23:45:25.365182 kubelet[2580]: E0417 23:45:25.364069 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fddcj_kube-system(12f85385-d410-47a5-885e-f33eab72be77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fddcj_kube-system(12f85385-d410-47a5-885e-f33eab72be77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fddcj" podUID="12f85385-d410-47a5-885e-f33eab72be77" Apr 17 23:45:25.373224 containerd[1454]: time="2026-04-17T23:45:25.373161254Z" level=error msg="Failed to destroy network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 17 23:45:25.373652 containerd[1454]: time="2026-04-17T23:45:25.373588351Z" level=error msg="encountered an error cleaning up failed sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.373810 containerd[1454]: time="2026-04-17T23:45:25.373674465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-bf5g9,Uid:c26d0a75-f7af-4717-af6f-93f123500133,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.374032 kubelet[2580]: E0417 23:45:25.373980 2580 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.374112 kubelet[2580]: E0417 23:45:25.374056 2580 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" Apr 17 
23:45:25.374112 kubelet[2580]: E0417 23:45:25.374092 2580 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" Apr 17 23:45:25.374219 kubelet[2580]: E0417 23:45:25.374163 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64fd9bf59-bf5g9_calico-system(c26d0a75-f7af-4717-af6f-93f123500133)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64fd9bf59-bf5g9_calico-system(c26d0a75-f7af-4717-af6f-93f123500133)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" podUID="c26d0a75-f7af-4717-af6f-93f123500133" Apr 17 23:45:25.515284 kubelet[2580]: I0417 23:45:25.515222 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Apr 17 23:45:25.516449 containerd[1454]: time="2026-04-17T23:45:25.516394763Z" level=info msg="StopPodSandbox for \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\"" Apr 17 23:45:25.517039 containerd[1454]: time="2026-04-17T23:45:25.516662339Z" level=info msg="Ensure that sandbox cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b in task-service has been cleanup successfully" Apr 17 23:45:25.521492 kubelet[2580]: I0417 
23:45:25.521284 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:25.524413 containerd[1454]: time="2026-04-17T23:45:25.523409765Z" level=info msg="StopPodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\"" Apr 17 23:45:25.524526 containerd[1454]: time="2026-04-17T23:45:25.524494706Z" level=info msg="Ensure that sandbox 83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77 in task-service has been cleanup successfully" Apr 17 23:45:25.528134 kubelet[2580]: I0417 23:45:25.527608 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:25.531222 containerd[1454]: time="2026-04-17T23:45:25.531158522Z" level=info msg="StopPodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\"" Apr 17 23:45:25.533494 containerd[1454]: time="2026-04-17T23:45:25.533135791Z" level=info msg="Ensure that sandbox 4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8 in task-service has been cleanup successfully" Apr 17 23:45:25.533992 kubelet[2580]: I0417 23:45:25.533963 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:25.541470 containerd[1454]: time="2026-04-17T23:45:25.541379666Z" level=info msg="StopPodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\"" Apr 17 23:45:25.547588 containerd[1454]: time="2026-04-17T23:45:25.545931469Z" level=info msg="Ensure that sandbox feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2 in task-service has been cleanup successfully" Apr 17 23:45:25.574784 kubelet[2580]: I0417 23:45:25.574502 2580 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Apr 17 23:45:25.578686 containerd[1454]: time="2026-04-17T23:45:25.577651848Z" level=info msg="StopPodSandbox for \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\"" Apr 17 23:45:25.578686 containerd[1454]: time="2026-04-17T23:45:25.577935969Z" level=info msg="Ensure that sandbox 7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f in task-service has been cleanup successfully" Apr 17 23:45:25.582662 kubelet[2580]: I0417 23:45:25.582626 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:25.589079 containerd[1454]: time="2026-04-17T23:45:25.587913387Z" level=info msg="StopPodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\"" Apr 17 23:45:25.589079 containerd[1454]: time="2026-04-17T23:45:25.588181869Z" level=info msg="Ensure that sandbox 6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e in task-service has been cleanup successfully" Apr 17 23:45:25.631604 kubelet[2580]: I0417 23:45:25.631561 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:25.632649 containerd[1454]: time="2026-04-17T23:45:25.632036939Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:45:25.640665 containerd[1454]: time="2026-04-17T23:45:25.640607741Z" level=info msg="StopPodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\"" Apr 17 23:45:25.641106 containerd[1454]: time="2026-04-17T23:45:25.640994233Z" level=info msg="Ensure that sandbox 95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3 in task-service has been cleanup successfully" 
Apr 17 23:45:25.704259 containerd[1454]: time="2026-04-17T23:45:25.703773296Z" level=error msg="StopPodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" failed" error="failed to destroy network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.710463 kubelet[2580]: E0417 23:45:25.710406 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:25.710647 kubelet[2580]: E0417 23:45:25.710491 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8"} Apr 17 23:45:25.710647 kubelet[2580]: E0417 23:45:25.710582 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31cae8c7-c29a-40c1-b51e-df324d3ffd96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.710647 kubelet[2580]: E0417 23:45:25.710628 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31cae8c7-c29a-40c1-b51e-df324d3ffd96\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-57wrg" podUID="31cae8c7-c29a-40c1-b51e-df324d3ffd96" Apr 17 23:45:25.710934 kubelet[2580]: I0417 23:45:25.710814 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:25.719516 containerd[1454]: time="2026-04-17T23:45:25.719082187Z" level=info msg="StopPodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\"" Apr 17 23:45:25.719516 containerd[1454]: time="2026-04-17T23:45:25.719443719Z" level=info msg="Ensure that sandbox 27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8 in task-service has been cleanup successfully" Apr 17 23:45:25.723694 containerd[1454]: time="2026-04-17T23:45:25.723431442Z" level=error msg="StopPodSandbox for \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\" failed" error="failed to destroy network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.724099 kubelet[2580]: E0417 23:45:25.723976 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Apr 17 23:45:25.725282 kubelet[2580]: E0417 23:45:25.724099 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f"} Apr 17 23:45:25.725282 kubelet[2580]: E0417 23:45:25.724181 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c26d0a75-f7af-4717-af6f-93f123500133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.725282 kubelet[2580]: E0417 23:45:25.724229 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c26d0a75-f7af-4717-af6f-93f123500133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" podUID="c26d0a75-f7af-4717-af6f-93f123500133" Apr 17 23:45:25.742355 containerd[1454]: time="2026-04-17T23:45:25.742279815Z" level=info msg="CreateContainer within sandbox \"fd0b2c69315a44fd3b888fe15bb7bbd7a2b9fafe5fd8bfa9d1024896d3942fd6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"df8799f2ee9f47667835f45f62a3ab0364e1996db6eacfb8a6312e880cb65aae\"" Apr 17 23:45:25.747691 containerd[1454]: time="2026-04-17T23:45:25.746994019Z" level=info msg="StartContainer for 
\"df8799f2ee9f47667835f45f62a3ab0364e1996db6eacfb8a6312e880cb65aae\"" Apr 17 23:45:25.791537 containerd[1454]: time="2026-04-17T23:45:25.791432169Z" level=error msg="StopPodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" failed" error="failed to destroy network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.792067 kubelet[2580]: E0417 23:45:25.791983 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:25.792283 kubelet[2580]: E0417 23:45:25.792088 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3"} Apr 17 23:45:25.792283 kubelet[2580]: E0417 23:45:25.792139 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7b55c10-3440-4ee3-957d-61f422c06f09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.792283 kubelet[2580]: E0417 23:45:25.792173 2580 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"f7b55c10-3440-4ee3-957d-61f422c06f09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" podUID="f7b55c10-3440-4ee3-957d-61f422c06f09" Apr 17 23:45:25.803150 containerd[1454]: time="2026-04-17T23:45:25.802955768Z" level=error msg="StopPodSandbox for \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\" failed" error="failed to destroy network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.803405 kubelet[2580]: E0417 23:45:25.803307 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Apr 17 23:45:25.803405 kubelet[2580]: E0417 23:45:25.803384 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b"} Apr 17 23:45:25.803704 kubelet[2580]: E0417 23:45:25.803434 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12f85385-d410-47a5-885e-f33eab72be77\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.803704 kubelet[2580]: E0417 23:45:25.803477 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12f85385-d410-47a5-885e-f33eab72be77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fddcj" podUID="12f85385-d410-47a5-885e-f33eab72be77" Apr 17 23:45:25.819991 containerd[1454]: time="2026-04-17T23:45:25.819927444Z" level=error msg="StopPodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" failed" error="failed to destroy network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.820543 kubelet[2580]: E0417 23:45:25.820489 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:25.820724 kubelet[2580]: E0417 23:45:25.820561 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77"} Apr 17 23:45:25.820724 kubelet[2580]: E0417 23:45:25.820611 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5226537-bc0c-470a-98d4-4745df18b74f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.820724 kubelet[2580]: E0417 23:45:25.820660 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5226537-bc0c-470a-98d4-4745df18b74f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-64fd9bf59-fr8st" podUID="e5226537-bc0c-470a-98d4-4745df18b74f" Apr 17 23:45:25.828192 containerd[1454]: time="2026-04-17T23:45:25.828038229Z" level=error msg="StopPodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" failed" error="failed to destroy network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 23:45:25.828532 kubelet[2580]: E0417 23:45:25.828405 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:25.828532 kubelet[2580]: E0417 23:45:25.828466 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8"} Apr 17 23:45:25.828532 kubelet[2580]: E0417 23:45:25.828513 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"175da1ed-b0db-4d24-bad8-f8db619e26a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.828925 kubelet[2580]: E0417 23:45:25.828547 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"175da1ed-b0db-4d24-bad8-f8db619e26a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q4qpd" podUID="175da1ed-b0db-4d24-bad8-f8db619e26a8" Apr 17 23:45:25.837276 
containerd[1454]: time="2026-04-17T23:45:25.836886095Z" level=error msg="StopPodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" failed" error="failed to destroy network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.837579 kubelet[2580]: E0417 23:45:25.837287 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:25.837579 kubelet[2580]: E0417 23:45:25.837362 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2"} Apr 17 23:45:25.837579 kubelet[2580]: E0417 23:45:25.837413 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"061fc661-7623-4bb3-8cee-51fca4a6f0d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.837579 kubelet[2580]: E0417 23:45:25.837451 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"061fc661-7623-4bb3-8cee-51fca4a6f0d4\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dt4x5" podUID="061fc661-7623-4bb3-8cee-51fca4a6f0d4" Apr 17 23:45:25.840853 containerd[1454]: time="2026-04-17T23:45:25.840683451Z" level=error msg="StopPodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" failed" error="failed to destroy network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:45:25.841069 kubelet[2580]: E0417 23:45:25.840985 2580 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:25.841069 kubelet[2580]: E0417 23:45:25.841042 2580 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e"} Apr 17 23:45:25.841253 kubelet[2580]: E0417 23:45:25.841089 2580 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8575dd81-0a24-4ece-99ca-0578256ac1d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:45:25.841253 kubelet[2580]: E0417 23:45:25.841132 2580 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8575dd81-0a24-4ece-99ca-0578256ac1d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5ff9bf5f47-pb5tr" podUID="8575dd81-0a24-4ece-99ca-0578256ac1d0" Apr 17 23:45:25.865070 systemd[1]: Started cri-containerd-df8799f2ee9f47667835f45f62a3ab0364e1996db6eacfb8a6312e880cb65aae.scope - libcontainer container df8799f2ee9f47667835f45f62a3ab0364e1996db6eacfb8a6312e880cb65aae. Apr 17 23:45:25.908040 containerd[1454]: time="2026-04-17T23:45:25.907895922Z" level=info msg="StartContainer for \"df8799f2ee9f47667835f45f62a3ab0364e1996db6eacfb8a6312e880cb65aae\" returns successfully" Apr 17 23:45:25.988471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f-shm.mount: Deactivated successfully. Apr 17 23:45:25.988898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77-shm.mount: Deactivated successfully. Apr 17 23:45:25.989192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e-shm.mount: Deactivated successfully. 
Apr 17 23:45:25.989526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8-shm.mount: Deactivated successfully. Apr 17 23:45:25.990072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3-shm.mount: Deactivated successfully. Apr 17 23:45:26.719350 containerd[1454]: time="2026-04-17T23:45:26.717878561Z" level=info msg="StopPodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\"" Apr 17 23:45:26.795874 kubelet[2580]: I0417 23:45:26.795795 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pts4n" podStartSLOduration=4.962794807 podStartE2EDuration="24.795745963s" podCreationTimestamp="2026-04-17 23:45:02 +0000 UTC" firstStartedPulling="2026-04-17 23:45:02.844177094 +0000 UTC m=+21.741815501" lastFinishedPulling="2026-04-17 23:45:22.677128249 +0000 UTC m=+41.574766657" observedRunningTime="2026-04-17 23:45:26.752103238 +0000 UTC m=+45.649741673" watchObservedRunningTime="2026-04-17 23:45:26.795745963 +0000 UTC m=+45.693384396" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.796 [INFO][3843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.797 [INFO][3843] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" iface="eth0" netns="/var/run/netns/cni-afd62784-2c4d-f727-b87e-65357e16e05e" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.799 [INFO][3843] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" iface="eth0" netns="/var/run/netns/cni-afd62784-2c4d-f727-b87e-65357e16e05e" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.799 [INFO][3843] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" iface="eth0" netns="/var/run/netns/cni-afd62784-2c4d-f727-b87e-65357e16e05e" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.799 [INFO][3843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.799 [INFO][3843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.833 [INFO][3850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.833 [INFO][3850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.833 [INFO][3850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.842 [WARNING][3850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.842 [INFO][3850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.844 [INFO][3850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:26.849769 containerd[1454]: 2026-04-17 23:45:26.847 [INFO][3843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:26.852780 containerd[1454]: time="2026-04-17T23:45:26.851138886Z" level=info msg="TearDown network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" successfully" Apr 17 23:45:26.852780 containerd[1454]: time="2026-04-17T23:45:26.851209927Z" level=info msg="StopPodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" returns successfully" Apr 17 23:45:26.855783 systemd[1]: run-netns-cni\x2dafd62784\x2d2c4d\x2df727\x2db87e\x2d65357e16e05e.mount: Deactivated successfully. 
Apr 17 23:45:26.931937 kubelet[2580]: I0417 23:45:26.931622 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlpd7\" (UniqueName: \"kubernetes.io/projected/8575dd81-0a24-4ece-99ca-0578256ac1d0-kube-api-access-zlpd7\") pod \"8575dd81-0a24-4ece-99ca-0578256ac1d0\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " Apr 17 23:45:26.932490 kubelet[2580]: I0417 23:45:26.932238 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-nginx-config\") pod \"8575dd81-0a24-4ece-99ca-0578256ac1d0\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " Apr 17 23:45:26.933033 kubelet[2580]: I0417 23:45:26.932806 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-ca-bundle\") pod \"8575dd81-0a24-4ece-99ca-0578256ac1d0\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " Apr 17 23:45:26.933033 kubelet[2580]: I0417 23:45:26.932906 2580 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-backend-key-pair\") pod \"8575dd81-0a24-4ece-99ca-0578256ac1d0\" (UID: \"8575dd81-0a24-4ece-99ca-0578256ac1d0\") " Apr 17 23:45:26.933399 kubelet[2580]: I0417 23:45:26.932916 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "8575dd81-0a24-4ece-99ca-0578256ac1d0" (UID: "8575dd81-0a24-4ece-99ca-0578256ac1d0"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:45:26.934213 kubelet[2580]: I0417 23:45:26.934168 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8575dd81-0a24-4ece-99ca-0578256ac1d0" (UID: "8575dd81-0a24-4ece-99ca-0578256ac1d0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:45:26.942364 kubelet[2580]: I0417 23:45:26.941607 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8575dd81-0a24-4ece-99ca-0578256ac1d0-kube-api-access-zlpd7" (OuterVolumeSpecName: "kube-api-access-zlpd7") pod "8575dd81-0a24-4ece-99ca-0578256ac1d0" (UID: "8575dd81-0a24-4ece-99ca-0578256ac1d0"). InnerVolumeSpecName "kube-api-access-zlpd7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:45:26.943972 kubelet[2580]: I0417 23:45:26.943926 2580 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8575dd81-0a24-4ece-99ca-0578256ac1d0" (UID: "8575dd81-0a24-4ece-99ca-0578256ac1d0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:45:26.947411 systemd[1]: var-lib-kubelet-pods-8575dd81\x2d0a24\x2d4ece\x2d99ca\x2d0578256ac1d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzlpd7.mount: Deactivated successfully. Apr 17 23:45:26.947584 systemd[1]: var-lib-kubelet-pods-8575dd81\x2d0a24\x2d4ece\x2d99ca\x2d0578256ac1d0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:45:27.034272 kubelet[2580]: I0417 23:45:27.034112 2580 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zlpd7\" (UniqueName: \"kubernetes.io/projected/8575dd81-0a24-4ece-99ca-0578256ac1d0-kube-api-access-zlpd7\") on node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" DevicePath \"\"" Apr 17 23:45:27.034272 kubelet[2580]: I0417 23:45:27.034166 2580 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-nginx-config\") on node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" DevicePath \"\"" Apr 17 23:45:27.034272 kubelet[2580]: I0417 23:45:27.034183 2580 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" DevicePath \"\"" Apr 17 23:45:27.034272 kubelet[2580]: I0417 23:45:27.034199 2580 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8575dd81-0a24-4ece-99ca-0578256ac1d0-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18\" DevicePath \"\"" Apr 17 23:45:27.290716 systemd[1]: Removed slice kubepods-besteffort-pod8575dd81_0a24_4ece_99ca_0578256ac1d0.slice - libcontainer container kubepods-besteffort-pod8575dd81_0a24_4ece_99ca_0578256ac1d0.slice. Apr 17 23:45:27.838302 systemd[1]: Created slice kubepods-besteffort-podbe0ebb3e_beae_43a2_b389_cfbfaee91e3a.slice - libcontainer container kubepods-besteffort-podbe0ebb3e_beae_43a2_b389_cfbfaee91e3a.slice. 
Apr 17 23:45:27.944799 kubelet[2580]: I0417 23:45:27.944717 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/be0ebb3e-beae-43a2-b389-cfbfaee91e3a-nginx-config\") pod \"whisker-58959cdf88-jnz64\" (UID: \"be0ebb3e-beae-43a2-b389-cfbfaee91e3a\") " pod="calico-system/whisker-58959cdf88-jnz64" Apr 17 23:45:27.946367 kubelet[2580]: I0417 23:45:27.944821 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0ebb3e-beae-43a2-b389-cfbfaee91e3a-whisker-ca-bundle\") pod \"whisker-58959cdf88-jnz64\" (UID: \"be0ebb3e-beae-43a2-b389-cfbfaee91e3a\") " pod="calico-system/whisker-58959cdf88-jnz64" Apr 17 23:45:27.946367 kubelet[2580]: I0417 23:45:27.944867 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be0ebb3e-beae-43a2-b389-cfbfaee91e3a-whisker-backend-key-pair\") pod \"whisker-58959cdf88-jnz64\" (UID: \"be0ebb3e-beae-43a2-b389-cfbfaee91e3a\") " pod="calico-system/whisker-58959cdf88-jnz64" Apr 17 23:45:27.946367 kubelet[2580]: I0417 23:45:27.944904 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbrdk\" (UniqueName: \"kubernetes.io/projected/be0ebb3e-beae-43a2-b389-cfbfaee91e3a-kube-api-access-wbrdk\") pod \"whisker-58959cdf88-jnz64\" (UID: \"be0ebb3e-beae-43a2-b389-cfbfaee91e3a\") " pod="calico-system/whisker-58959cdf88-jnz64" Apr 17 23:45:28.021812 kernel: calico-node[3902]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:45:28.149652 containerd[1454]: time="2026-04-17T23:45:28.149104514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58959cdf88-jnz64,Uid:be0ebb3e-beae-43a2-b389-cfbfaee91e3a,Namespace:calico-system,Attempt:0,}" Apr 17 
23:45:28.412197 systemd-networkd[1365]: calid6c33d0b676: Link UP Apr 17 23:45:28.412591 systemd-networkd[1365]: calid6c33d0b676: Gained carrier Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.247 [INFO][3987] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0 whisker-58959cdf88- calico-system be0ebb3e-beae-43a2-b389-cfbfaee91e3a 958 0 2026-04-17 23:45:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58959cdf88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 whisker-58959cdf88-jnz64 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid6c33d0b676 [] [] }} ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.248 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.313 [INFO][3998] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" HandleID="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 
23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.325 [INFO][3998] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" HandleID="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"whisker-58959cdf88-jnz64", "timestamp":"2026-04-17 23:45:28.313014007 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000295600)} Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.326 [INFO][3998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.326 [INFO][3998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.326 [INFO][3998] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.330 [INFO][3998] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.337 [INFO][3998] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.346 [INFO][3998] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.351 [INFO][3998] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.357 [INFO][3998] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.358 [INFO][3998] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.363 [INFO][3998] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.376 [INFO][3998] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.392 [INFO][3998] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.1/26] block=192.168.29.0/26 handle="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.392 [INFO][3998] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.1/26] handle="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.392 [INFO][3998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:28.443818 containerd[1454]: 2026-04-17 23:45:28.393 [INFO][3998] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.1/26] IPv6=[] ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" HandleID="k8s-pod-network.6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 23:45:28.447174 containerd[1454]: 2026-04-17 23:45:28.395 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0", GenerateName:"whisker-58959cdf88-", 
Namespace:"calico-system", SelfLink:"", UID:"be0ebb3e-beae-43a2-b389-cfbfaee91e3a", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58959cdf88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", Pod:"whisker-58959cdf88-jnz64", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid6c33d0b676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:28.447174 containerd[1454]: 2026-04-17 23:45:28.396 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.1/32] ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 23:45:28.447174 containerd[1454]: 2026-04-17 23:45:28.396 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6c33d0b676 ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 23:45:28.447174 containerd[1454]: 2026-04-17 23:45:28.412 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 23:45:28.447174 containerd[1454]: 2026-04-17 23:45:28.415 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0", GenerateName:"whisker-58959cdf88-", Namespace:"calico-system", SelfLink:"", UID:"be0ebb3e-beae-43a2-b389-cfbfaee91e3a", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58959cdf88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", 
ContainerID:"6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb", Pod:"whisker-58959cdf88-jnz64", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid6c33d0b676", MAC:"7a:8c:49:dd:58:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:28.447174 containerd[1454]: 2026-04-17 23:45:28.433 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb" Namespace="calico-system" Pod="whisker-58959cdf88-jnz64" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--58959cdf88--jnz64-eth0" Apr 17 23:45:28.497905 containerd[1454]: time="2026-04-17T23:45:28.496790092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:28.497905 containerd[1454]: time="2026-04-17T23:45:28.496898606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:28.497905 containerd[1454]: time="2026-04-17T23:45:28.497252113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:28.499371 containerd[1454]: time="2026-04-17T23:45:28.498026530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:28.552070 systemd[1]: Started cri-containerd-6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb.scope - libcontainer container 6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb. 
Apr 17 23:45:28.629987 containerd[1454]: time="2026-04-17T23:45:28.629931998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58959cdf88-jnz64,Uid:be0ebb3e-beae-43a2-b389-cfbfaee91e3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb\"" Apr 17 23:45:28.632294 containerd[1454]: time="2026-04-17T23:45:28.632241140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:45:28.865303 systemd-networkd[1365]: vxlan.calico: Link UP Apr 17 23:45:28.865317 systemd-networkd[1365]: vxlan.calico: Gained carrier Apr 17 23:45:29.287830 kubelet[2580]: I0417 23:45:29.287404 2580 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8575dd81-0a24-4ece-99ca-0578256ac1d0" path="/var/lib/kubelet/pods/8575dd81-0a24-4ece-99ca-0578256ac1d0/volumes" Apr 17 23:45:29.760475 containerd[1454]: time="2026-04-17T23:45:29.760373223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:29.762311 containerd[1454]: time="2026-04-17T23:45:29.762062306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:45:29.764980 containerd[1454]: time="2026-04-17T23:45:29.764865223Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:29.770644 containerd[1454]: time="2026-04-17T23:45:29.769846449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:29.773354 containerd[1454]: time="2026-04-17T23:45:29.773289271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.140991126s" Apr 17 23:45:29.773629 containerd[1454]: time="2026-04-17T23:45:29.773497766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:45:29.784204 containerd[1454]: time="2026-04-17T23:45:29.784141029Z" level=info msg="CreateContainer within sandbox \"6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:45:29.804355 containerd[1454]: time="2026-04-17T23:45:29.804288822Z" level=info msg="CreateContainer within sandbox \"6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8613f45baffde6d59855639eed52a7ed10374270759fae4a5727d87a1778451b\"" Apr 17 23:45:29.807810 containerd[1454]: time="2026-04-17T23:45:29.805446557Z" level=info msg="StartContainer for \"8613f45baffde6d59855639eed52a7ed10374270759fae4a5727d87a1778451b\"" Apr 17 23:45:29.867133 systemd[1]: Started cri-containerd-8613f45baffde6d59855639eed52a7ed10374270759fae4a5727d87a1778451b.scope - libcontainer container 8613f45baffde6d59855639eed52a7ed10374270759fae4a5727d87a1778451b. 
Apr 17 23:45:29.908688 systemd-networkd[1365]: calid6c33d0b676: Gained IPv6LL Apr 17 23:45:29.942107 containerd[1454]: time="2026-04-17T23:45:29.941909055Z" level=info msg="StartContainer for \"8613f45baffde6d59855639eed52a7ed10374270759fae4a5727d87a1778451b\" returns successfully" Apr 17 23:45:29.944827 containerd[1454]: time="2026-04-17T23:45:29.944726774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:45:30.058550 systemd[1]: run-containerd-runc-k8s.io-8613f45baffde6d59855639eed52a7ed10374270759fae4a5727d87a1778451b-runc.Nf88fR.mount: Deactivated successfully. Apr 17 23:45:30.100907 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Apr 17 23:45:31.380519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049517101.mount: Deactivated successfully. Apr 17 23:45:31.410341 containerd[1454]: time="2026-04-17T23:45:31.410277814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:31.412406 containerd[1454]: time="2026-04-17T23:45:31.412336673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:45:31.414516 containerd[1454]: time="2026-04-17T23:45:31.414466671Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:31.420380 containerd[1454]: time="2026-04-17T23:45:31.420303629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:31.422714 containerd[1454]: time="2026-04-17T23:45:31.422645854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.477822647s" Apr 17 23:45:31.422714 containerd[1454]: time="2026-04-17T23:45:31.422714232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:45:31.430226 containerd[1454]: time="2026-04-17T23:45:31.430159006Z" level=info msg="CreateContainer within sandbox \"6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:45:31.454428 containerd[1454]: time="2026-04-17T23:45:31.454360947Z" level=info msg="CreateContainer within sandbox \"6ca4ac0a9f8f76acbb765ebb59fc92b327c801364f5d2ba28fc40965f0dc1ccb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"eb8cdb58d7f96768a2287f0b5f21d7ddff31af41c5eb4b090db5fce218958bea\"" Apr 17 23:45:31.455876 containerd[1454]: time="2026-04-17T23:45:31.455569378Z" level=info msg="StartContainer for \"eb8cdb58d7f96768a2287f0b5f21d7ddff31af41c5eb4b090db5fce218958bea\"" Apr 17 23:45:31.507056 systemd[1]: Started cri-containerd-eb8cdb58d7f96768a2287f0b5f21d7ddff31af41c5eb4b090db5fce218958bea.scope - libcontainer container eb8cdb58d7f96768a2287f0b5f21d7ddff31af41c5eb4b090db5fce218958bea. 
Apr 17 23:45:31.582331 containerd[1454]: time="2026-04-17T23:45:31.582236119Z" level=info msg="StartContainer for \"eb8cdb58d7f96768a2287f0b5f21d7ddff31af41c5eb4b090db5fce218958bea\" returns successfully" Apr 17 23:45:31.769354 kubelet[2580]: I0417 23:45:31.767008 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58959cdf88-jnz64" podStartSLOduration=1.975165054 podStartE2EDuration="4.766981256s" podCreationTimestamp="2026-04-17 23:45:27 +0000 UTC" firstStartedPulling="2026-04-17 23:45:28.631930202 +0000 UTC m=+47.529568612" lastFinishedPulling="2026-04-17 23:45:31.423746406 +0000 UTC m=+50.321384814" observedRunningTime="2026-04-17 23:45:31.764492292 +0000 UTC m=+50.662130721" watchObservedRunningTime="2026-04-17 23:45:31.766981256 +0000 UTC m=+50.664619690" Apr 17 23:45:32.748911 ntpd[1423]: Listen normally on 8 vxlan.calico 192.168.29.0:123 Apr 17 23:45:32.749052 ntpd[1423]: Listen normally on 9 calid6c33d0b676 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:45:32.749479 ntpd[1423]: 17 Apr 23:45:32 ntpd[1423]: Listen normally on 8 vxlan.calico 192.168.29.0:123 Apr 17 23:45:32.749479 ntpd[1423]: 17 Apr 23:45:32 ntpd[1423]: Listen normally on 9 calid6c33d0b676 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:45:32.749479 ntpd[1423]: 17 Apr 23:45:32 ntpd[1423]: Listen normally on 10 vxlan.calico [fe80::6485:aff:fece:4ce1%5]:123 Apr 17 23:45:32.749143 ntpd[1423]: Listen normally on 10 vxlan.calico [fe80::6485:aff:fece:4ce1%5]:123 Apr 17 23:45:38.283363 containerd[1454]: time="2026-04-17T23:45:38.283243924Z" level=info msg="StopPodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\"" Apr 17 23:45:38.285029 containerd[1454]: time="2026-04-17T23:45:38.283278403Z" level=info msg="StopPodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\"" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.378 [INFO][4260] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.379 [INFO][4260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" iface="eth0" netns="/var/run/netns/cni-d5d3b0d8-132b-7768-983a-45e6796fbcc3" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.379 [INFO][4260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" iface="eth0" netns="/var/run/netns/cni-d5d3b0d8-132b-7768-983a-45e6796fbcc3" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.379 [INFO][4260] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" iface="eth0" netns="/var/run/netns/cni-d5d3b0d8-132b-7768-983a-45e6796fbcc3" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.380 [INFO][4260] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.380 [INFO][4260] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.433 [INFO][4277] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.433 [INFO][4277] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.433 [INFO][4277] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.447 [WARNING][4277] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.447 [INFO][4277] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.449 [INFO][4277] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:38.454156 containerd[1454]: 2026-04-17 23:45:38.451 [INFO][4260] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:38.455998 containerd[1454]: time="2026-04-17T23:45:38.454649574Z" level=info msg="TearDown network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" successfully" Apr 17 23:45:38.455998 containerd[1454]: time="2026-04-17T23:45:38.454692741Z" level=info msg="StopPodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" returns successfully" Apr 17 23:45:38.461097 containerd[1454]: time="2026-04-17T23:45:38.461053084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-57wrg,Uid:31cae8c7-c29a-40c1-b51e-df324d3ffd96,Namespace:calico-system,Attempt:1,}" Apr 17 23:45:38.464537 systemd[1]: run-netns-cni\x2dd5d3b0d8\x2d132b\x2d7768\x2d983a\x2d45e6796fbcc3.mount: Deactivated successfully. Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.398 [INFO][4265] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.399 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" iface="eth0" netns="/var/run/netns/cni-876a79aa-5c7b-3f85-5d51-2e60222d8354" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.401 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" iface="eth0" netns="/var/run/netns/cni-876a79aa-5c7b-3f85-5d51-2e60222d8354" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.403 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" iface="eth0" netns="/var/run/netns/cni-876a79aa-5c7b-3f85-5d51-2e60222d8354" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.403 [INFO][4265] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.403 [INFO][4265] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.451 [INFO][4283] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.451 [INFO][4283] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.452 [INFO][4283] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.472 [WARNING][4283] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.472 [INFO][4283] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.475 [INFO][4283] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:38.479295 containerd[1454]: 2026-04-17 23:45:38.477 [INFO][4265] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:38.483191 containerd[1454]: time="2026-04-17T23:45:38.481863590Z" level=info msg="TearDown network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" successfully" Apr 17 23:45:38.483191 containerd[1454]: time="2026-04-17T23:45:38.481904740Z" level=info msg="StopPodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" returns successfully" Apr 17 23:45:38.484031 containerd[1454]: time="2026-04-17T23:45:38.483965010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76dd74c988-x7l4x,Uid:f7b55c10-3440-4ee3-957d-61f422c06f09,Namespace:calico-system,Attempt:1,}" Apr 17 23:45:38.499672 systemd[1]: run-netns-cni\x2d876a79aa\x2d5c7b\x2d3f85\x2d5d51\x2d2e60222d8354.mount: Deactivated successfully. 
Apr 17 23:45:38.707979 systemd-networkd[1365]: calia45be848ef7: Link UP Apr 17 23:45:38.710256 systemd-networkd[1365]: calia45be848ef7: Gained carrier Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.590 [INFO][4296] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0 calico-kube-controllers-76dd74c988- calico-system f7b55c10-3440-4ee3-957d-61f422c06f09 1003 0 2026-04-17 23:45:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76dd74c988 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 calico-kube-controllers-76dd74c988-x7l4x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia45be848ef7 [] [] }} ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.591 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.648 [INFO][4320] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" 
HandleID="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.659 [INFO][4320] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" HandleID="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"calico-kube-controllers-76dd74c988-x7l4x", "timestamp":"2026-04-17 23:45:38.647997138 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003baf20)} Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.659 [INFO][4320] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.660 [INFO][4320] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.660 [INFO][4320] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.663 [INFO][4320] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.669 [INFO][4320] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.676 [INFO][4320] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.678 [INFO][4320] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.681 [INFO][4320] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.681 [INFO][4320] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.683 [INFO][4320] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3 Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.692 [INFO][4320] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.700 [INFO][4320] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.2/26] block=192.168.29.0/26 handle="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.700 [INFO][4320] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.2/26] handle="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.700 [INFO][4320] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:38.738092 containerd[1454]: 2026-04-17 23:45:38.700 [INFO][4320] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.2/26] IPv6=[] ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" HandleID="k8s-pod-network.5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.741374 containerd[1454]: 2026-04-17 23:45:38.703 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0", GenerateName:"calico-kube-controllers-76dd74c988-", Namespace:"calico-system", SelfLink:"", UID:"f7b55c10-3440-4ee3-957d-61f422c06f09", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76dd74c988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", Pod:"calico-kube-controllers-76dd74c988-x7l4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia45be848ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:38.741374 containerd[1454]: 2026-04-17 23:45:38.704 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.2/32] ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.741374 containerd[1454]: 2026-04-17 23:45:38.704 
[INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia45be848ef7 ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.741374 containerd[1454]: 2026-04-17 23:45:38.710 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.741374 containerd[1454]: 2026-04-17 23:45:38.711 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0", GenerateName:"calico-kube-controllers-76dd74c988-", Namespace:"calico-system", SelfLink:"", UID:"f7b55c10-3440-4ee3-957d-61f422c06f09", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76dd74c988", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3", Pod:"calico-kube-controllers-76dd74c988-x7l4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia45be848ef7", MAC:"12:c9:4e:7d:7e:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:38.741374 containerd[1454]: 2026-04-17 23:45:38.734 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3" Namespace="calico-system" Pod="calico-kube-controllers-76dd74c988-x7l4x" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:38.800348 containerd[1454]: time="2026-04-17T23:45:38.799886703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:38.800348 containerd[1454]: time="2026-04-17T23:45:38.799980037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:38.800348 containerd[1454]: time="2026-04-17T23:45:38.800029578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:38.800348 containerd[1454]: time="2026-04-17T23:45:38.800170234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:38.852278 systemd[1]: Started cri-containerd-5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3.scope - libcontainer container 5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3. Apr 17 23:45:38.854193 systemd-networkd[1365]: cali74d2dc65d57: Link UP Apr 17 23:45:38.856999 systemd-networkd[1365]: cali74d2dc65d57: Gained carrier Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.575 [INFO][4291] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0 goldmane-5b85766d88- calico-system 31cae8c7-c29a-40c1-b51e-df324d3ffd96 1002 0 2026-04-17 23:45:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 goldmane-5b85766d88-57wrg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali74d2dc65d57 [] [] }} ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.576 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.640 [INFO][4315] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" HandleID="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.659 [INFO][4315] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" HandleID="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"goldmane-5b85766d88-57wrg", "timestamp":"2026-04-17 23:45:38.640410504 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000422b00)} Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.659 [INFO][4315] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.700 [INFO][4315] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.700 [INFO][4315] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.766 [INFO][4315] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.779 [INFO][4315] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.788 [INFO][4315] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.792 [INFO][4315] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.798 [INFO][4315] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.799 [INFO][4315] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.810 [INFO][4315] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756 Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.821 [INFO][4315] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.834 [INFO][4315] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.3/26] block=192.168.29.0/26 handle="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.834 [INFO][4315] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.3/26] handle="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.835 [INFO][4315] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:38.892206 containerd[1454]: 2026-04-17 23:45:38.835 [INFO][4315] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.3/26] IPv6=[] ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" HandleID="k8s-pod-network.b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.893520 containerd[1454]: 2026-04-17 23:45:38.844 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0", 
GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"31cae8c7-c29a-40c1-b51e-df324d3ffd96", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", Pod:"goldmane-5b85766d88-57wrg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74d2dc65d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:38.893520 containerd[1454]: 2026-04-17 23:45:38.844 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.3/32] ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.893520 containerd[1454]: 2026-04-17 23:45:38.844 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74d2dc65d57 ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.893520 containerd[1454]: 2026-04-17 23:45:38.858 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.893520 containerd[1454]: 2026-04-17 23:45:38.859 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"31cae8c7-c29a-40c1-b51e-df324d3ffd96", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", 
ContainerID:"b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756", Pod:"goldmane-5b85766d88-57wrg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74d2dc65d57", MAC:"c6:8e:99:89:1a:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:38.893520 containerd[1454]: 2026-04-17 23:45:38.889 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756" Namespace="calico-system" Pod="goldmane-5b85766d88-57wrg" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:38.938114 containerd[1454]: time="2026-04-17T23:45:38.938000785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:38.938660 containerd[1454]: time="2026-04-17T23:45:38.938344779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:38.938660 containerd[1454]: time="2026-04-17T23:45:38.938439447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:38.939033 containerd[1454]: time="2026-04-17T23:45:38.938633420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:38.973516 systemd[1]: Started cri-containerd-b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756.scope - libcontainer container b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756. 
Apr 17 23:45:39.043020 containerd[1454]: time="2026-04-17T23:45:39.042967316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76dd74c988-x7l4x,Uid:f7b55c10-3440-4ee3-957d-61f422c06f09,Namespace:calico-system,Attempt:1,} returns sandbox id \"5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3\"" Apr 17 23:45:39.046843 containerd[1454]: time="2026-04-17T23:45:39.045988335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:45:39.106190 containerd[1454]: time="2026-04-17T23:45:39.106131845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-57wrg,Uid:31cae8c7-c29a-40c1-b51e-df324d3ffd96,Namespace:calico-system,Attempt:1,} returns sandbox id \"b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756\"" Apr 17 23:45:39.288024 containerd[1454]: time="2026-04-17T23:45:39.287249383Z" level=info msg="StopPodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\"" Apr 17 23:45:39.288024 containerd[1454]: time="2026-04-17T23:45:39.287320390Z" level=info msg="StopPodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\"" Apr 17 23:45:39.292587 containerd[1454]: time="2026-04-17T23:45:39.287267271Z" level=info msg="StopPodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\"" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.506 [INFO][4479] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.506 [INFO][4479] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" iface="eth0" netns="/var/run/netns/cni-f540ab98-7503-682a-5292-1eac50d5dcab" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.507 [INFO][4479] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" iface="eth0" netns="/var/run/netns/cni-f540ab98-7503-682a-5292-1eac50d5dcab" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.507 [INFO][4479] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" iface="eth0" netns="/var/run/netns/cni-f540ab98-7503-682a-5292-1eac50d5dcab" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.507 [INFO][4479] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.507 [INFO][4479] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.574 [INFO][4511] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.575 [INFO][4511] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.575 [INFO][4511] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.590 [WARNING][4511] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.590 [INFO][4511] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.593 [INFO][4511] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:39.599381 containerd[1454]: 2026-04-17 23:45:39.595 [INFO][4479] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:39.602400 containerd[1454]: time="2026-04-17T23:45:39.602147244Z" level=info msg="TearDown network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" successfully" Apr 17 23:45:39.602821 containerd[1454]: time="2026-04-17T23:45:39.602673690Z" level=info msg="StopPodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" returns successfully" Apr 17 23:45:39.606135 containerd[1454]: time="2026-04-17T23:45:39.606091523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dt4x5,Uid:061fc661-7623-4bb3-8cee-51fca4a6f0d4,Namespace:kube-system,Attempt:1,}" Apr 17 23:45:39.614668 systemd[1]: run-netns-cni\x2df540ab98\x2d7503\x2d682a\x2d5292\x2d1eac50d5dcab.mount: Deactivated successfully. 
Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.473 [INFO][4469] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.473 [INFO][4469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" iface="eth0" netns="/var/run/netns/cni-01f996f8-3f69-2eda-6b91-c82de27cf33d" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.474 [INFO][4469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" iface="eth0" netns="/var/run/netns/cni-01f996f8-3f69-2eda-6b91-c82de27cf33d" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.476 [INFO][4469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" iface="eth0" netns="/var/run/netns/cni-01f996f8-3f69-2eda-6b91-c82de27cf33d" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.476 [INFO][4469] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.476 [INFO][4469] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.580 [INFO][4504] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:39.624838 
containerd[1454]: 2026-04-17 23:45:39.580 [INFO][4504] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.592 [INFO][4504] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.610 [WARNING][4504] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.611 [INFO][4504] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.615 [INFO][4504] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:39.624838 containerd[1454]: 2026-04-17 23:45:39.621 [INFO][4469] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:39.624838 containerd[1454]: time="2026-04-17T23:45:39.624120508Z" level=info msg="TearDown network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" successfully" Apr 17 23:45:39.624838 containerd[1454]: time="2026-04-17T23:45:39.624155270Z" level=info msg="StopPodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" returns successfully" Apr 17 23:45:39.626415 containerd[1454]: time="2026-04-17T23:45:39.625709301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-fr8st,Uid:e5226537-bc0c-470a-98d4-4745df18b74f,Namespace:calico-system,Attempt:1,}" Apr 17 23:45:39.632068 systemd[1]: run-netns-cni\x2d01f996f8\x2d3f69\x2d2eda\x2d6b91\x2dc82de27cf33d.mount: Deactivated successfully. Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.539 [INFO][4480] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.541 [INFO][4480] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" iface="eth0" netns="/var/run/netns/cni-8f523b5f-a1c9-1d58-c0dc-a24a8eb62a21" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.544 [INFO][4480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" iface="eth0" netns="/var/run/netns/cni-8f523b5f-a1c9-1d58-c0dc-a24a8eb62a21" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.544 [INFO][4480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" iface="eth0" netns="/var/run/netns/cni-8f523b5f-a1c9-1d58-c0dc-a24a8eb62a21" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.544 [INFO][4480] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.544 [INFO][4480] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.643 [INFO][4516] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.643 [INFO][4516] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.643 [INFO][4516] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.656 [WARNING][4516] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.657 [INFO][4516] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.664 [INFO][4516] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:39.680018 containerd[1454]: 2026-04-17 23:45:39.673 [INFO][4480] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:39.680827 containerd[1454]: time="2026-04-17T23:45:39.680302554Z" level=info msg="TearDown network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" successfully" Apr 17 23:45:39.680827 containerd[1454]: time="2026-04-17T23:45:39.680341003Z" level=info msg="StopPodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" returns successfully" Apr 17 23:45:39.682201 containerd[1454]: time="2026-04-17T23:45:39.681795231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4qpd,Uid:175da1ed-b0db-4d24-bad8-f8db619e26a8,Namespace:calico-system,Attempt:1,}" Apr 17 23:45:39.828234 systemd-networkd[1365]: calia45be848ef7: Gained IPv6LL Apr 17 23:45:39.992317 systemd-networkd[1365]: cali975334b4d6f: Link UP Apr 17 23:45:39.993774 systemd-networkd[1365]: cali975334b4d6f: Gained carrier Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.778 
[INFO][4528] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0 coredns-674b8bbfcf- kube-system 061fc661-7623-4bb3-8cee-51fca4a6f0d4 1018 0 2026-04-17 23:44:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 coredns-674b8bbfcf-dt4x5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali975334b4d6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.778 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.896 [INFO][4561] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" HandleID="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.918 [INFO][4561] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" 
HandleID="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d6280), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"coredns-674b8bbfcf-dt4x5", "timestamp":"2026-04-17 23:45:39.896320767 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000536000)} Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.918 [INFO][4561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.918 [INFO][4561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.918 [INFO][4561] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.924 [INFO][4561] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.938 [INFO][4561] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.955 [INFO][4561] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.958 [INFO][4561] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.962 [INFO][4561] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.963 [INFO][4561] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.967 [INFO][4561] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.974 [INFO][4561] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.982 [INFO][4561] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.4/26] block=192.168.29.0/26 handle="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.982 [INFO][4561] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.4/26] handle="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.982 [INFO][4561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:40.016252 containerd[1454]: 2026-04-17 23:45:39.982 [INFO][4561] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.4/26] IPv6=[] ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" HandleID="k8s-pod-network.e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.017497 containerd[1454]: 2026-04-17 23:45:39.986 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"061fc661-7623-4bb3-8cee-51fca4a6f0d4", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", Pod:"coredns-674b8bbfcf-dt4x5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali975334b4d6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:40.017497 containerd[1454]: 2026-04-17 23:45:39.986 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.4/32] ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.017497 
containerd[1454]: 2026-04-17 23:45:39.986 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali975334b4d6f ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.017497 containerd[1454]: 2026-04-17 23:45:39.993 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.017497 containerd[1454]: 2026-04-17 23:45:39.994 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"061fc661-7623-4bb3-8cee-51fca4a6f0d4", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e", Pod:"coredns-674b8bbfcf-dt4x5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali975334b4d6f", MAC:"0e:10:66:4f:0d:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:40.017497 containerd[1454]: 2026-04-17 23:45:40.011 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e" Namespace="kube-system" Pod="coredns-674b8bbfcf-dt4x5" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:40.106974 containerd[1454]: time="2026-04-17T23:45:40.090939408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:40.106974 containerd[1454]: time="2026-04-17T23:45:40.091687237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:40.106974 containerd[1454]: time="2026-04-17T23:45:40.094334886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:40.106974 containerd[1454]: time="2026-04-17T23:45:40.094496223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:40.123404 systemd-networkd[1365]: cali01106e0c217: Link UP Apr 17 23:45:40.126547 systemd-networkd[1365]: cali01106e0c217: Gained carrier Apr 17 23:45:40.166101 systemd[1]: Started cri-containerd-e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e.scope - libcontainer container e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e. Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:39.815 [INFO][4537] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0 calico-apiserver-64fd9bf59- calico-system e5226537-bc0c-470a-98d4-4745df18b74f 1017 0 2026-04-17 23:45:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64fd9bf59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 calico-apiserver-64fd9bf59-fr8st eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali01106e0c217 [] [] }} ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-" Apr 17 23:45:40.184641 
containerd[1454]: 2026-04-17 23:45:39.815 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:39.943 [INFO][4568] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" HandleID="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:39.955 [INFO][4568] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" HandleID="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cdc30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"calico-apiserver-64fd9bf59-fr8st", "timestamp":"2026-04-17 23:45:39.943028716 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000192000)} Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:39.955 [INFO][4568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:39.983 [INFO][4568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:39.983 [INFO][4568] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.026 [INFO][4568] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.045 [INFO][4568] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.059 [INFO][4568] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.064 [INFO][4568] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.071 [INFO][4568] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.072 [INFO][4568] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.077 [INFO][4568] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1 Apr 17 23:45:40.184641 containerd[1454]: 
2026-04-17 23:45:40.089 [INFO][4568] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.109 [INFO][4568] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.5/26] block=192.168.29.0/26 handle="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.109 [INFO][4568] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.5/26] handle="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.109 [INFO][4568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:45:40.184641 containerd[1454]: 2026-04-17 23:45:40.109 [INFO][4568] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.5/26] IPv6=[] ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" HandleID="k8s-pod-network.21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.189042 containerd[1454]: 2026-04-17 23:45:40.114 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0", GenerateName:"calico-apiserver-64fd9bf59-", Namespace:"calico-system", SelfLink:"", UID:"e5226537-bc0c-470a-98d4-4745df18b74f", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fd9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", 
Pod:"calico-apiserver-64fd9bf59-fr8st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01106e0c217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:40.189042 containerd[1454]: 2026-04-17 23:45:40.114 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.5/32] ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.189042 containerd[1454]: 2026-04-17 23:45:40.114 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01106e0c217 ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.189042 containerd[1454]: 2026-04-17 23:45:40.128 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.189042 containerd[1454]: 2026-04-17 23:45:40.139 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0", GenerateName:"calico-apiserver-64fd9bf59-", Namespace:"calico-system", SelfLink:"", UID:"e5226537-bc0c-470a-98d4-4745df18b74f", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fd9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1", Pod:"calico-apiserver-64fd9bf59-fr8st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01106e0c217", MAC:"ae:39:5d:a1:e2:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:40.189042 containerd[1454]: 2026-04-17 23:45:40.181 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1" 
Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-fr8st" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:40.258033 containerd[1454]: time="2026-04-17T23:45:40.253718196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:40.258033 containerd[1454]: time="2026-04-17T23:45:40.253830406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:40.258033 containerd[1454]: time="2026-04-17T23:45:40.253907594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:40.258033 containerd[1454]: time="2026-04-17T23:45:40.254083497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:40.273892 systemd-networkd[1365]: caliac4b8cf1b21: Link UP Apr 17 23:45:40.284184 systemd-networkd[1365]: caliac4b8cf1b21: Gained carrier Apr 17 23:45:40.289197 containerd[1454]: time="2026-04-17T23:45:40.284317413Z" level=info msg="StopPodSandbox for \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\"" Apr 17 23:45:40.340502 systemd-networkd[1365]: cali74d2dc65d57: Gained IPv6LL Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:39.861 [INFO][4547] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0 csi-node-driver- calico-system 175da1ed-b0db-4d24-bad8-f8db619e26a8 1019 0 2026-04-17 23:45:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 csi-node-driver-q4qpd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliac4b8cf1b21 [] [] }} ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:39.861 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:39.953 [INFO][4576] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" HandleID="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:39.971 [INFO][4576] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" HandleID="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", 
"pod":"csi-node-driver-q4qpd", "timestamp":"2026-04-17 23:45:39.953657882 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:39.971 [INFO][4576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.110 [INFO][4576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.111 [INFO][4576] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.132 [INFO][4576] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.157 [INFO][4576] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.178 [INFO][4576] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.188 [INFO][4576] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.200 [INFO][4576] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 
containerd[1454]: 2026-04-17 23:45:40.201 [INFO][4576] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.208 [INFO][4576] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.219 [INFO][4576] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.240 [INFO][4576] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.6/26] block=192.168.29.0/26 handle="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.241 [INFO][4576] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.6/26] handle="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.241 [INFO][4576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:45:40.373944 containerd[1454]: 2026-04-17 23:45:40.242 [INFO][4576] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.6/26] IPv6=[] ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" HandleID="k8s-pod-network.930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.375125 containerd[1454]: 2026-04-17 23:45:40.254 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"175da1ed-b0db-4d24-bad8-f8db619e26a8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", 
Pod:"csi-node-driver-q4qpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac4b8cf1b21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:40.375125 containerd[1454]: 2026-04-17 23:45:40.254 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.6/32] ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.375125 containerd[1454]: 2026-04-17 23:45:40.254 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac4b8cf1b21 ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.375125 containerd[1454]: 2026-04-17 23:45:40.306 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.375125 containerd[1454]: 2026-04-17 23:45:40.308 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" Namespace="calico-system" Pod="csi-node-driver-q4qpd" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"175da1ed-b0db-4d24-bad8-f8db619e26a8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b", Pod:"csi-node-driver-q4qpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac4b8cf1b21", MAC:"6e:c3:f7:96:60:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:40.375125 containerd[1454]: 2026-04-17 23:45:40.341 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b" 
Namespace="calico-system" Pod="csi-node-driver-q4qpd" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:40.390524 containerd[1454]: time="2026-04-17T23:45:40.388892285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dt4x5,Uid:061fc661-7623-4bb3-8cee-51fca4a6f0d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e\"" Apr 17 23:45:40.403649 systemd[1]: Started cri-containerd-21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1.scope - libcontainer container 21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1. Apr 17 23:45:40.418811 containerd[1454]: time="2026-04-17T23:45:40.418564563Z" level=info msg="CreateContainer within sandbox \"e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:45:40.474837 containerd[1454]: time="2026-04-17T23:45:40.472744614Z" level=info msg="CreateContainer within sandbox \"e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f82ee8db4ea40007b76a7a9ea6ed69ca32f8ab00fffafbc96870a284aa098b08\"" Apr 17 23:45:40.479850 containerd[1454]: time="2026-04-17T23:45:40.476520812Z" level=info msg="StartContainer for \"f82ee8db4ea40007b76a7a9ea6ed69ca32f8ab00fffafbc96870a284aa098b08\"" Apr 17 23:45:40.482380 systemd[1]: run-netns-cni\x2d8f523b5f\x2da1c9\x2d1d58\x2dc0dc\x2da24a8eb62a21.mount: Deactivated successfully. Apr 17 23:45:40.556197 containerd[1454]: time="2026-04-17T23:45:40.556065115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:40.556535 containerd[1454]: time="2026-04-17T23:45:40.556491983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:40.556706 containerd[1454]: time="2026-04-17T23:45:40.556671136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:40.557990 containerd[1454]: time="2026-04-17T23:45:40.557923963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:40.650084 systemd[1]: Started cri-containerd-930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b.scope - libcontainer container 930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b. Apr 17 23:45:40.654703 systemd[1]: Started cri-containerd-f82ee8db4ea40007b76a7a9ea6ed69ca32f8ab00fffafbc96870a284aa098b08.scope - libcontainer container f82ee8db4ea40007b76a7a9ea6ed69ca32f8ab00fffafbc96870a284aa098b08. Apr 17 23:45:40.786604 containerd[1454]: time="2026-04-17T23:45:40.786543858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q4qpd,Uid:175da1ed-b0db-4d24-bad8-f8db619e26a8,Namespace:calico-system,Attempt:1,} returns sandbox id \"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b\"" Apr 17 23:45:40.787173 containerd[1454]: time="2026-04-17T23:45:40.787138526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-fr8st,Uid:e5226537-bc0c-470a-98d4-4745df18b74f,Namespace:calico-system,Attempt:1,} returns sandbox id \"21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1\"" Apr 17 23:45:40.803458 containerd[1454]: time="2026-04-17T23:45:40.803404645Z" level=info msg="StartContainer for \"f82ee8db4ea40007b76a7a9ea6ed69ca32f8ab00fffafbc96870a284aa098b08\" returns successfully" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.686 [INFO][4681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Apr 17 23:45:40.879138 
containerd[1454]: 2026-04-17 23:45:40.688 [INFO][4681] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" iface="eth0" netns="/var/run/netns/cni-fa4387e7-6647-cfa3-4fc5-399f08f201ce" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.689 [INFO][4681] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" iface="eth0" netns="/var/run/netns/cni-fa4387e7-6647-cfa3-4fc5-399f08f201ce" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.690 [INFO][4681] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" iface="eth0" netns="/var/run/netns/cni-fa4387e7-6647-cfa3-4fc5-399f08f201ce" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.690 [INFO][4681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.690 [INFO][4681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.835 [INFO][4773] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" HandleID="k8s-pod-network.7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.835 [INFO][4773] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.835 [INFO][4773] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.854 [WARNING][4773] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" HandleID="k8s-pod-network.7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.858 [INFO][4773] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" HandleID="k8s-pod-network.7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.866 [INFO][4773] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:40.879138 containerd[1454]: 2026-04-17 23:45:40.873 [INFO][4681] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f" Apr 17 23:45:40.880230 containerd[1454]: time="2026-04-17T23:45:40.879455206Z" level=info msg="TearDown network for sandbox \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\" successfully" Apr 17 23:45:40.880230 containerd[1454]: time="2026-04-17T23:45:40.879493334Z" level=info msg="StopPodSandbox for \"7618d1442c23e836fcf01337ab30af744c1c29a21f0508dd62e459a5a9cdfe4f\" returns successfully" Apr 17 23:45:40.880995 containerd[1454]: time="2026-04-17T23:45:40.880933101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-bf5g9,Uid:c26d0a75-f7af-4717-af6f-93f123500133,Namespace:calico-system,Attempt:1,}" Apr 17 23:45:41.111792 systemd-networkd[1365]: cali9aa20ca0380: Link UP Apr 17 23:45:41.124429 systemd-networkd[1365]: cali9aa20ca0380: Gained carrier Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:40.986 [INFO][4808] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0 calico-apiserver-64fd9bf59- calico-system c26d0a75-f7af-4717-af6f-93f123500133 1035 0 2026-04-17 23:45:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64fd9bf59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 calico-apiserver-64fd9bf59-bf5g9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9aa20ca0380 [] [] }} ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:40.986 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.039 [INFO][4819] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" HandleID="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.054 [INFO][4819] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" HandleID="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"calico-apiserver-64fd9bf59-bf5g9", "timestamp":"2026-04-17 23:45:41.039204821 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002851e0)} Apr 17 
23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.054 [INFO][4819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.055 [INFO][4819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.055 [INFO][4819] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.058 [INFO][4819] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.064 [INFO][4819] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.072 [INFO][4819] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.075 [INFO][4819] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.080 [INFO][4819] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.080 [INFO][4819] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.082 [INFO][4819] ipam/ipam.go 1806: Creating new 
handle: k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8 Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.091 [INFO][4819] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 handle="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.103 [INFO][4819] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.7/26] block=192.168.29.0/26 handle="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.103 [INFO][4819] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.7/26] handle="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.103 [INFO][4819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:45:41.148373 containerd[1454]: 2026-04-17 23:45:41.103 [INFO][4819] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.7/26] IPv6=[] ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" HandleID="k8s-pod-network.9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.157743 containerd[1454]: 2026-04-17 23:45:41.106 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0", GenerateName:"calico-apiserver-64fd9bf59-", Namespace:"calico-system", SelfLink:"", UID:"c26d0a75-f7af-4717-af6f-93f123500133", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fd9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", 
Pod:"calico-apiserver-64fd9bf59-bf5g9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9aa20ca0380", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:41.157743 containerd[1454]: 2026-04-17 23:45:41.106 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.7/32] ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.157743 containerd[1454]: 2026-04-17 23:45:41.107 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9aa20ca0380 ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.157743 containerd[1454]: 2026-04-17 23:45:41.113 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.157743 containerd[1454]: 2026-04-17 23:45:41.113 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0", GenerateName:"calico-apiserver-64fd9bf59-", Namespace:"calico-system", SelfLink:"", UID:"c26d0a75-f7af-4717-af6f-93f123500133", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fd9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8", Pod:"calico-apiserver-64fd9bf59-bf5g9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9aa20ca0380", MAC:"ca:11:8f:c4:6e:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:41.157743 containerd[1454]: 2026-04-17 23:45:41.135 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8" 
Namespace="calico-system" Pod="calico-apiserver-64fd9bf59-bf5g9" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--bf5g9-eth0" Apr 17 23:45:41.296071 containerd[1454]: time="2026-04-17T23:45:41.295967720Z" level=info msg="StopPodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\"" Apr 17 23:45:41.302484 containerd[1454]: time="2026-04-17T23:45:41.302433306Z" level=info msg="StopPodSandbox for \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\"" Apr 17 23:45:41.330808 containerd[1454]: time="2026-04-17T23:45:41.330549509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:41.332828 containerd[1454]: time="2026-04-17T23:45:41.331573602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:41.332828 containerd[1454]: time="2026-04-17T23:45:41.331876743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:41.332828 containerd[1454]: time="2026-04-17T23:45:41.332275737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:41.419408 systemd[1]: Started cri-containerd-9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8.scope - libcontainer container 9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8. Apr 17 23:45:41.429571 systemd-networkd[1365]: cali975334b4d6f: Gained IPv6LL Apr 17 23:45:41.431185 systemd-networkd[1365]: cali01106e0c217: Gained IPv6LL Apr 17 23:45:41.477684 systemd[1]: run-netns-cni\x2dfa4387e7\x2d6647\x2dcfa3\x2d4fc5\x2d399f08f201ce.mount: Deactivated successfully. 
Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.511 [INFO][4888] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.511 [INFO][4888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" iface="eth0" netns="/var/run/netns/cni-e93bbc42-cd62-2480-eeca-05eff4539959" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.512 [INFO][4888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" iface="eth0" netns="/var/run/netns/cni-e93bbc42-cd62-2480-eeca-05eff4539959" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.513 [INFO][4888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" iface="eth0" netns="/var/run/netns/cni-e93bbc42-cd62-2480-eeca-05eff4539959" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.513 [INFO][4888] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.513 [INFO][4888] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.629 [INFO][4917] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" HandleID="k8s-pod-network.cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:41.682172 containerd[1454]: 
2026-04-17 23:45:41.630 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.630 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.662 [WARNING][4917] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" HandleID="k8s-pod-network.cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.662 [INFO][4917] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" HandleID="k8s-pod-network.cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.666 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:41.682172 containerd[1454]: 2026-04-17 23:45:41.673 [INFO][4888] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b" Apr 17 23:45:41.687474 containerd[1454]: time="2026-04-17T23:45:41.682861459Z" level=info msg="TearDown network for sandbox \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\" successfully" Apr 17 23:45:41.687474 containerd[1454]: time="2026-04-17T23:45:41.682903516Z" level=info msg="StopPodSandbox for \"cb739c8086952e4134b0e3d8170d8a5d2c39c5687f7e26ccbeaada34b13d3c8b\" returns successfully" Apr 17 23:45:41.691470 containerd[1454]: time="2026-04-17T23:45:41.689271219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fddcj,Uid:12f85385-d410-47a5-885e-f33eab72be77,Namespace:kube-system,Attempt:1,}" Apr 17 23:45:41.692018 systemd[1]: run-netns-cni\x2de93bbc42\x2dcd62\x2d2480\x2deeca\x2d05eff4539959.mount: Deactivated successfully. Apr 17 23:45:41.709633 containerd[1454]: time="2026-04-17T23:45:41.709567301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64fd9bf59-bf5g9,Uid:c26d0a75-f7af-4717-af6f-93f123500133,Namespace:calico-system,Attempt:1,} returns sandbox id \"9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8\"" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.540 [WARNING][4868] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0", GenerateName:"calico-apiserver-64fd9bf59-", Namespace:"calico-system", SelfLink:"", UID:"e5226537-bc0c-470a-98d4-4745df18b74f", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fd9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1", Pod:"calico-apiserver-64fd9bf59-fr8st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01106e0c217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.540 [INFO][4868] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.540 [INFO][4868] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" iface="eth0" netns="" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.540 [INFO][4868] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.540 [INFO][4868] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.649 [INFO][4924] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.649 [INFO][4924] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.668 [INFO][4924] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.701 [WARNING][4924] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.701 [INFO][4924] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.708 [INFO][4924] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:41.745248 containerd[1454]: 2026-04-17 23:45:41.724 [INFO][4868] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:41.750324 containerd[1454]: time="2026-04-17T23:45:41.745274725Z" level=info msg="TearDown network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" successfully" Apr 17 23:45:41.750324 containerd[1454]: time="2026-04-17T23:45:41.745311304Z" level=info msg="StopPodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" returns successfully" Apr 17 23:45:41.750324 containerd[1454]: time="2026-04-17T23:45:41.747156866Z" level=info msg="RemovePodSandbox for \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\"" Apr 17 23:45:41.750324 containerd[1454]: time="2026-04-17T23:45:41.747855554Z" level=info msg="Forcibly stopping sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\"" Apr 17 23:45:41.751225 systemd-networkd[1365]: caliac4b8cf1b21: Gained IPv6LL Apr 17 23:45:41.884787 kubelet[2580]: I0417 23:45:41.883808 2580 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dt4x5" podStartSLOduration=55.883734243 podStartE2EDuration="55.883734243s" podCreationTimestamp="2026-04-17 23:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:41.854096936 +0000 UTC m=+60.751735391" watchObservedRunningTime="2026-04-17 23:45:41.883734243 +0000 UTC m=+60.781372677" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:41.994 [WARNING][4960] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0", GenerateName:"calico-apiserver-64fd9bf59-", Namespace:"calico-system", SelfLink:"", UID:"e5226537-bc0c-470a-98d4-4745df18b74f", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64fd9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1", 
Pod:"calico-apiserver-64fd9bf59-fr8st", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01106e0c217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:41.995 [INFO][4960] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:41.996 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" iface="eth0" netns="" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:41.996 [INFO][4960] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:41.996 [INFO][4960] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.082 [INFO][4981] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.082 [INFO][4981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.083 [INFO][4981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.101 [WARNING][4981] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.103 [INFO][4981] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" HandleID="k8s-pod-network.83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--apiserver--64fd9bf59--fr8st-eth0" Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.105 [INFO][4981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:42.114982 containerd[1454]: 2026-04-17 23:45:42.108 [INFO][4960] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77" Apr 17 23:45:42.117614 containerd[1454]: time="2026-04-17T23:45:42.114944957Z" level=info msg="TearDown network for sandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" successfully" Apr 17 23:45:42.129574 containerd[1454]: time="2026-04-17T23:45:42.128905400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:45:42.129574 containerd[1454]: time="2026-04-17T23:45:42.129001259Z" level=info msg="RemovePodSandbox \"83e8ba8bc50b5acb32ea2d04e4fcd6a0723c0d9267a667c6a22d8b6d99a64e77\" returns successfully" Apr 17 23:45:42.130835 containerd[1454]: time="2026-04-17T23:45:42.130796802Z" level=info msg="StopPodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\"" Apr 17 23:45:42.188880 systemd-networkd[1365]: calia3af9a1a55b: Link UP Apr 17 23:45:42.193968 systemd-networkd[1365]: calia3af9a1a55b: Gained carrier Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:41.944 [INFO][4941] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0 coredns-674b8bbfcf- kube-system 12f85385-d410-47a5-885e-f33eab72be77 1045 0 2026-04-17 23:44:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18 coredns-674b8bbfcf-fddcj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3af9a1a55b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:41.945 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.250071 
containerd[1454]: 2026-04-17 23:45:42.088 [INFO][4974] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" HandleID="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.109 [INFO][4974] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" HandleID="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a9ec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", "pod":"coredns-674b8bbfcf-fddcj", "timestamp":"2026-04-17 23:45:42.088425559 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000370420)} Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.110 [INFO][4974] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.110 [INFO][4974] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.110 [INFO][4974] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18' Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.115 [INFO][4974] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.125 [INFO][4974] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.134 [INFO][4974] ipam/ipam.go 526: Trying affinity for 192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.138 [INFO][4974] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.145 [INFO][4974] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.0/26 host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.145 [INFO][4974] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.0/26 handle="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.148 [INFO][4974] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6 Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.155 [INFO][4974] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.0/26 
handle="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.175 [INFO][4974] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.8/26] block=192.168.29.0/26 handle="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.175 [INFO][4974] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.8/26] handle="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" host="ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18" Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.175 [INFO][4974] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:42.250071 containerd[1454]: 2026-04-17 23:45:42.175 [INFO][4974] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.8/26] IPv6=[] ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" HandleID="k8s-pod-network.4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.251271 containerd[1454]: 2026-04-17 23:45:42.179 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"12f85385-d410-47a5-885e-f33eab72be77", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"", Pod:"coredns-674b8bbfcf-fddcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3af9a1a55b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:42.251271 containerd[1454]: 2026-04-17 23:45:42.179 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.8/32] ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.251271 
containerd[1454]: 2026-04-17 23:45:42.179 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3af9a1a55b ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.251271 containerd[1454]: 2026-04-17 23:45:42.199 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.251271 containerd[1454]: 2026-04-17 23:45:42.205 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"12f85385-d410-47a5-885e-f33eab72be77", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6", Pod:"coredns-674b8bbfcf-fddcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3af9a1a55b", MAC:"96:8c:ea:13:7b:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:42.251271 containerd[1454]: 2026-04-17 23:45:42.238 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6" Namespace="kube-system" Pod="coredns-674b8bbfcf-fddcj" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--fddcj-eth0" Apr 17 23:45:42.330482 containerd[1454]: time="2026-04-17T23:45:42.329435315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:42.330482 containerd[1454]: time="2026-04-17T23:45:42.329571296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:42.330482 containerd[1454]: time="2026-04-17T23:45:42.329624116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:42.330482 containerd[1454]: time="2026-04-17T23:45:42.330122761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:42.393223 systemd[1]: Started cri-containerd-4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6.scope - libcontainer container 4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6. Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.285 [WARNING][4999] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"061fc661-7623-4bb3-8cee-51fca4a6f0d4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", 
ContainerID:"e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e", Pod:"coredns-674b8bbfcf-dt4x5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali975334b4d6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.286 [INFO][4999] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.286 [INFO][4999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" iface="eth0" netns="" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.286 [INFO][4999] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.286 [INFO][4999] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.429 [INFO][5024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.430 [INFO][5024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.430 [INFO][5024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.445 [WARNING][5024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.445 [INFO][5024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.450 [INFO][5024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:42.483278 containerd[1454]: 2026-04-17 23:45:42.470 [INFO][4999] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.485771 containerd[1454]: time="2026-04-17T23:45:42.485449592Z" level=info msg="TearDown network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" successfully" Apr 17 23:45:42.485771 containerd[1454]: time="2026-04-17T23:45:42.485521609Z" level=info msg="StopPodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" returns successfully" Apr 17 23:45:42.488959 containerd[1454]: time="2026-04-17T23:45:42.486513945Z" level=info msg="RemovePodSandbox for \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\"" Apr 17 23:45:42.488959 containerd[1454]: time="2026-04-17T23:45:42.486562766Z" level=info msg="Forcibly stopping sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\"" Apr 17 23:45:42.565588 containerd[1454]: time="2026-04-17T23:45:42.565392593Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-fddcj,Uid:12f85385-d410-47a5-885e-f33eab72be77,Namespace:kube-system,Attempt:1,} returns sandbox id \"4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6\"" Apr 17 23:45:42.576630 containerd[1454]: time="2026-04-17T23:45:42.576547605Z" level=info msg="CreateContainer within sandbox \"4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:45:42.610996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805085742.mount: Deactivated successfully. Apr 17 23:45:42.619367 containerd[1454]: time="2026-04-17T23:45:42.619291471Z" level=info msg="CreateContainer within sandbox \"4112cb540a4bd64e456b5c6b9260ceabc3b433b0af003946b0d4f2f2651f31c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ec828c874976b2a302aee265da45d58d0fdc1defcf4b81d525928cc44bf7c3c\"" Apr 17 23:45:42.621776 containerd[1454]: time="2026-04-17T23:45:42.621557397Z" level=info msg="StartContainer for \"8ec828c874976b2a302aee265da45d58d0fdc1defcf4b81d525928cc44bf7c3c\"" Apr 17 23:45:42.703121 systemd[1]: Started cri-containerd-8ec828c874976b2a302aee265da45d58d0fdc1defcf4b81d525928cc44bf7c3c.scope - libcontainer container 8ec828c874976b2a302aee265da45d58d0fdc1defcf4b81d525928cc44bf7c3c. Apr 17 23:45:42.822076 containerd[1454]: time="2026-04-17T23:45:42.821703750Z" level=info msg="StartContainer for \"8ec828c874976b2a302aee265da45d58d0fdc1defcf4b81d525928cc44bf7c3c\" returns successfully" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.701 [WARNING][5069] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"061fc661-7623-4bb3-8cee-51fca4a6f0d4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 44, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"e57d9408cd6250483ec96856e1ecd693059ee966edcd6ac8941e7813857af21e", Pod:"coredns-674b8bbfcf-dt4x5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali975334b4d6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.702 [INFO][5069] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.702 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" iface="eth0" netns="" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.702 [INFO][5069] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.702 [INFO][5069] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.796 [INFO][5102] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.797 [INFO][5102] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.797 [INFO][5102] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.811 [WARNING][5102] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.812 [INFO][5102] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" HandleID="k8s-pod-network.feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-coredns--674b8bbfcf--dt4x5-eth0" Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.816 [INFO][5102] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:42.833339 containerd[1454]: 2026-04-17 23:45:42.824 [INFO][5069] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2" Apr 17 23:45:42.833339 containerd[1454]: time="2026-04-17T23:45:42.833276984Z" level=info msg="TearDown network for sandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" successfully" Apr 17 23:45:42.849430 containerd[1454]: time="2026-04-17T23:45:42.847465834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:45:42.849430 containerd[1454]: time="2026-04-17T23:45:42.847562627Z" level=info msg="RemovePodSandbox \"feac6d4c5064bae1ecae1b5b6dbda910d7c6a05041dcfd85c2cfd91accb7b8e2\" returns successfully" Apr 17 23:45:42.850258 containerd[1454]: time="2026-04-17T23:45:42.850193272Z" level=info msg="StopPodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\"" Apr 17 23:45:42.965130 systemd-networkd[1365]: cali9aa20ca0380: Gained IPv6LL Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.036 [WARNING][5137] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.037 [INFO][5137] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.037 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" iface="eth0" netns="" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.037 [INFO][5137] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.037 [INFO][5137] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.114 [INFO][5149] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.114 [INFO][5149] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.114 [INFO][5149] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.126 [WARNING][5149] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.126 [INFO][5149] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.129 [INFO][5149] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:43.138446 containerd[1454]: 2026-04-17 23:45:43.134 [INFO][5137] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.138446 containerd[1454]: time="2026-04-17T23:45:43.137900995Z" level=info msg="TearDown network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" successfully" Apr 17 23:45:43.138446 containerd[1454]: time="2026-04-17T23:45:43.137943383Z" level=info msg="StopPodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" returns successfully" Apr 17 23:45:43.139314 containerd[1454]: time="2026-04-17T23:45:43.138595480Z" level=info msg="RemovePodSandbox for \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\"" Apr 17 23:45:43.139314 containerd[1454]: time="2026-04-17T23:45:43.138641426Z" level=info msg="Forcibly stopping sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\"" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.213 [WARNING][5165] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving 
forward with the clean up ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" WorkloadEndpoint="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.213 [INFO][5165] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.213 [INFO][5165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" iface="eth0" netns="" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.213 [INFO][5165] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.213 [INFO][5165] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.263 [INFO][5173] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.263 [INFO][5173] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.263 [INFO][5173] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.275 [WARNING][5173] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.275 [INFO][5173] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" HandleID="k8s-pod-network.6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-whisker--5ff9bf5f47--pb5tr-eth0" Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.279 [INFO][5173] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:43.289593 containerd[1454]: 2026-04-17 23:45:43.284 [INFO][5165] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e" Apr 17 23:45:43.289593 containerd[1454]: time="2026-04-17T23:45:43.288401449Z" level=info msg="TearDown network for sandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" successfully" Apr 17 23:45:43.295029 containerd[1454]: time="2026-04-17T23:45:43.294876254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:45:43.295305 containerd[1454]: time="2026-04-17T23:45:43.295255791Z" level=info msg="RemovePodSandbox \"6327f4bb9fa3d487837a2de884360e46bcbbb7308170d92df9821b310b444f5e\" returns successfully" Apr 17 23:45:43.297053 containerd[1454]: time="2026-04-17T23:45:43.297020058Z" level=info msg="StopPodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\"" Apr 17 23:45:43.466270 systemd[1]: run-containerd-runc-k8s.io-8ec828c874976b2a302aee265da45d58d0fdc1defcf4b81d525928cc44bf7c3c-runc.cLjqCx.mount: Deactivated successfully. Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.387 [WARNING][5188] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"175da1ed-b0db-4d24-bad8-f8db619e26a8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", 
ContainerID:"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b", Pod:"csi-node-driver-q4qpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac4b8cf1b21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.388 [INFO][5188] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.388 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" iface="eth0" netns="" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.388 [INFO][5188] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.388 [INFO][5188] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.469 [INFO][5195] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.469 [INFO][5195] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.469 [INFO][5195] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.486 [WARNING][5195] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.486 [INFO][5195] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.489 [INFO][5195] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:43.499779 containerd[1454]: 2026-04-17 23:45:43.494 [INFO][5188] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.501567 containerd[1454]: time="2026-04-17T23:45:43.500346453Z" level=info msg="TearDown network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" successfully" Apr 17 23:45:43.501567 containerd[1454]: time="2026-04-17T23:45:43.500605899Z" level=info msg="StopPodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" returns successfully" Apr 17 23:45:43.503003 containerd[1454]: time="2026-04-17T23:45:43.502512841Z" level=info msg="RemovePodSandbox for \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\"" Apr 17 23:45:43.503003 containerd[1454]: time="2026-04-17T23:45:43.502682288Z" level=info msg="Forcibly stopping sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\"" Apr 17 23:45:43.540320 systemd-networkd[1365]: calia3af9a1a55b: Gained IPv6LL Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.611 [WARNING][5209] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"175da1ed-b0db-4d24-bad8-f8db619e26a8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b", Pod:"csi-node-driver-q4qpd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac4b8cf1b21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.612 [INFO][5209] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.612 
[INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" iface="eth0" netns="" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.612 [INFO][5209] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.612 [INFO][5209] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.667 [INFO][5221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.668 [INFO][5221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.668 [INFO][5221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.684 [WARNING][5221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.684 [INFO][5221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" HandleID="k8s-pod-network.27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-csi--node--driver--q4qpd-eth0" Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.686 [INFO][5221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:43.694871 containerd[1454]: 2026-04-17 23:45:43.689 [INFO][5209] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8" Apr 17 23:45:43.694871 containerd[1454]: time="2026-04-17T23:45:43.694190462Z" level=info msg="TearDown network for sandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" successfully" Apr 17 23:45:43.702300 containerd[1454]: time="2026-04-17T23:45:43.702034325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:45:43.702300 containerd[1454]: time="2026-04-17T23:45:43.702147679Z" level=info msg="RemovePodSandbox \"27bd3cc1880a584587399c3866ae029deea5b7d3527437a00885408c4c20fdd8\" returns successfully" Apr 17 23:45:43.704194 containerd[1454]: time="2026-04-17T23:45:43.703722273Z" level=info msg="StopPodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\"" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.781 [WARNING][5236] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0", GenerateName:"calico-kube-controllers-76dd74c988-", Namespace:"calico-system", SelfLink:"", UID:"f7b55c10-3440-4ee3-957d-61f422c06f09", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76dd74c988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3", Pod:"calico-kube-controllers-76dd74c988-x7l4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia45be848ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.782 [INFO][5236] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.782 [INFO][5236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" iface="eth0" netns="" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.782 [INFO][5236] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.782 [INFO][5236] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.826 [INFO][5243] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.827 [INFO][5243] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.827 [INFO][5243] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.844 [WARNING][5243] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.844 [INFO][5243] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.848 [INFO][5243] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:43.853911 containerd[1454]: 2026-04-17 23:45:43.850 [INFO][5236] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:43.853911 containerd[1454]: time="2026-04-17T23:45:43.853880402Z" level=info msg="TearDown network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" successfully" Apr 17 23:45:43.855644 containerd[1454]: time="2026-04-17T23:45:43.853916178Z" level=info msg="StopPodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" returns successfully" Apr 17 23:45:43.856798 containerd[1454]: time="2026-04-17T23:45:43.856517253Z" level=info msg="RemovePodSandbox for \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\"" Apr 17 23:45:43.856798 containerd[1454]: time="2026-04-17T23:45:43.856567190Z" level=info msg="Forcibly stopping sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\"" Apr 17 23:45:43.939102 kubelet[2580]: I0417 23:45:43.938398 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fddcj" podStartSLOduration=57.938369367 podStartE2EDuration="57.938369367s" podCreationTimestamp="2026-04-17 23:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:42.918948964 +0000 UTC m=+61.816587398" watchObservedRunningTime="2026-04-17 23:45:43.938369367 +0000 UTC m=+62.836007802" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.001 [WARNING][5258] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0", GenerateName:"calico-kube-controllers-76dd74c988-", Namespace:"calico-system", SelfLink:"", UID:"f7b55c10-3440-4ee3-957d-61f422c06f09", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76dd74c988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3", Pod:"calico-kube-controllers-76dd74c988-x7l4x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia45be848ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.003 [INFO][5258] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:44.104818 
containerd[1454]: 2026-04-17 23:45:44.003 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" iface="eth0" netns="" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.003 [INFO][5258] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.003 [INFO][5258] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.068 [INFO][5266] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.068 [INFO][5266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.068 [INFO][5266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.094 [WARNING][5266] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.094 [INFO][5266] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" HandleID="k8s-pod-network.95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-calico--kube--controllers--76dd74c988--x7l4x-eth0" Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.097 [INFO][5266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:44.104818 containerd[1454]: 2026-04-17 23:45:44.101 [INFO][5258] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3" Apr 17 23:45:44.104818 containerd[1454]: time="2026-04-17T23:45:44.104406688Z" level=info msg="TearDown network for sandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" successfully" Apr 17 23:45:44.113821 containerd[1454]: time="2026-04-17T23:45:44.113227228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:45:44.114202 containerd[1454]: time="2026-04-17T23:45:44.113926951Z" level=info msg="RemovePodSandbox \"95d8c0551203f5b791f0fe6f6726f1a206d82bb38b263a6966dbb1782e4d68a3\" returns successfully" Apr 17 23:45:44.115321 containerd[1454]: time="2026-04-17T23:45:44.115283258Z" level=info msg="StopPodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\"" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.198 [WARNING][5283] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"31cae8c7-c29a-40c1-b51e-df324d3ffd96", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756", Pod:"goldmane-5b85766d88-57wrg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74d2dc65d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.198 [INFO][5283] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.199 [INFO][5283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" iface="eth0" netns="" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.199 [INFO][5283] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.199 [INFO][5283] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.255 [INFO][5291] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.255 [INFO][5291] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.256 [INFO][5291] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.275 [WARNING][5291] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.275 [INFO][5291] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.280 [INFO][5291] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:44.288437 containerd[1454]: 2026-04-17 23:45:44.284 [INFO][5283] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.289973 containerd[1454]: time="2026-04-17T23:45:44.288495435Z" level=info msg="TearDown network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" successfully" Apr 17 23:45:44.289973 containerd[1454]: time="2026-04-17T23:45:44.288527240Z" level=info msg="StopPodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" returns successfully" Apr 17 23:45:44.291161 containerd[1454]: time="2026-04-17T23:45:44.290369109Z" level=info msg="RemovePodSandbox for \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\"" Apr 17 23:45:44.291161 containerd[1454]: time="2026-04-17T23:45:44.290415819Z" level=info msg="Forcibly stopping sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\"" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.378 [WARNING][5306] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"31cae8c7-c29a-40c1-b51e-df324d3ffd96", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 45, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260417-2100-4f86d70245d86af02b18", ContainerID:"b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756", Pod:"goldmane-5b85766d88-57wrg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali74d2dc65d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.378 [INFO][5306] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.378 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" iface="eth0" netns="" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.378 [INFO][5306] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.378 [INFO][5306] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.449 [INFO][5313] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.449 [INFO][5313] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.449 [INFO][5313] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.463 [WARNING][5313] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.463 [INFO][5313] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" HandleID="k8s-pod-network.4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Workload="ci--4081--3--6--nightly--20260417--2100--4f86d70245d86af02b18-k8s-goldmane--5b85766d88--57wrg-eth0" Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.466 [INFO][5313] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:45:44.473304 containerd[1454]: 2026-04-17 23:45:44.471 [INFO][5306] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8" Apr 17 23:45:44.475125 containerd[1454]: time="2026-04-17T23:45:44.473317695Z" level=info msg="TearDown network for sandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" successfully" Apr 17 23:45:44.481018 containerd[1454]: time="2026-04-17T23:45:44.480945976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:45:44.481189 containerd[1454]: time="2026-04-17T23:45:44.481054967Z" level=info msg="RemovePodSandbox \"4d372ae17dc52f1a34be6c67aaa4c18a5026b1b736078592d167095736599ca8\" returns successfully" Apr 17 23:45:44.565848 containerd[1454]: time="2026-04-17T23:45:44.565774674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:44.567555 containerd[1454]: time="2026-04-17T23:45:44.567468508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:45:44.569362 containerd[1454]: time="2026-04-17T23:45:44.569286597Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:44.572974 containerd[1454]: time="2026-04-17T23:45:44.572932025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:44.574359 containerd[1454]: time="2026-04-17T23:45:44.574083411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.528043082s" Apr 17 23:45:44.574359 containerd[1454]: time="2026-04-17T23:45:44.574199593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:45:44.576193 containerd[1454]: time="2026-04-17T23:45:44.576073458Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:45:44.616452 containerd[1454]: time="2026-04-17T23:45:44.616382290Z" level=info msg="CreateContainer within sandbox \"5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:45:44.642676 containerd[1454]: time="2026-04-17T23:45:44.642595218Z" level=info msg="CreateContainer within sandbox \"5247169a4998c500ab7772373651ec55c6a8a13f8f2f3fc1fa29735e8bd1acb3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"224ab84675c7ed6ec4f8d183688f4916f430f62d7ff13302d715dd8d985e01bb\"" Apr 17 23:45:44.650456 containerd[1454]: time="2026-04-17T23:45:44.650399592Z" level=info msg="StartContainer for \"224ab84675c7ed6ec4f8d183688f4916f430f62d7ff13302d715dd8d985e01bb\"" Apr 17 23:45:44.706036 systemd[1]: Started cri-containerd-224ab84675c7ed6ec4f8d183688f4916f430f62d7ff13302d715dd8d985e01bb.scope - libcontainer container 224ab84675c7ed6ec4f8d183688f4916f430f62d7ff13302d715dd8d985e01bb. 
Apr 17 23:45:44.766295 containerd[1454]: time="2026-04-17T23:45:44.766059120Z" level=info msg="StartContainer for \"224ab84675c7ed6ec4f8d183688f4916f430f62d7ff13302d715dd8d985e01bb\" returns successfully" Apr 17 23:45:44.940415 kubelet[2580]: I0417 23:45:44.940104 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76dd74c988-x7l4x" podStartSLOduration=37.410061603 podStartE2EDuration="42.940078907s" podCreationTimestamp="2026-04-17 23:45:02 +0000 UTC" firstStartedPulling="2026-04-17 23:45:39.045541991 +0000 UTC m=+57.943180406" lastFinishedPulling="2026-04-17 23:45:44.575559277 +0000 UTC m=+63.473197710" observedRunningTime="2026-04-17 23:45:44.939695756 +0000 UTC m=+63.837334192" watchObservedRunningTime="2026-04-17 23:45:44.940078907 +0000 UTC m=+63.837717332" Apr 17 23:45:45.749946 ntpd[1423]: Listen normally on 11 calia45be848ef7 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 11 calia45be848ef7 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 12 cali74d2dc65d57 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 13 cali975334b4d6f [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 14 cali01106e0c217 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 15 caliac4b8cf1b21 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 16 cali9aa20ca0380 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:45:45.751109 ntpd[1423]: 17 Apr 23:45:45 ntpd[1423]: Listen normally on 17 calia3af9a1a55b [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:45:45.750079 ntpd[1423]: Listen normally on 12 cali74d2dc65d57 
[fe80::ecee:eeff:feee:eeee%9]:123 Apr 17 23:45:45.750147 ntpd[1423]: Listen normally on 13 cali975334b4d6f [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:45:45.750207 ntpd[1423]: Listen normally on 14 cali01106e0c217 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:45:45.750278 ntpd[1423]: Listen normally on 15 caliac4b8cf1b21 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:45:45.750339 ntpd[1423]: Listen normally on 16 cali9aa20ca0380 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:45:45.750402 ntpd[1423]: Listen normally on 17 calia3af9a1a55b [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:45:46.600125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822505059.mount: Deactivated successfully. Apr 17 23:45:47.234143 containerd[1454]: time="2026-04-17T23:45:47.234041528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:47.235772 containerd[1454]: time="2026-04-17T23:45:47.235678043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:45:47.237354 containerd[1454]: time="2026-04-17T23:45:47.237259436Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:47.244634 containerd[1454]: time="2026-04-17T23:45:47.244545551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:47.248057 containerd[1454]: time="2026-04-17T23:45:47.248007899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.671879847s" Apr 17 23:45:47.248182 containerd[1454]: time="2026-04-17T23:45:47.248061320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:45:47.255210 containerd[1454]: time="2026-04-17T23:45:47.255164961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:45:47.261836 containerd[1454]: time="2026-04-17T23:45:47.261788159Z" level=info msg="CreateContainer within sandbox \"b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:45:47.288653 containerd[1454]: time="2026-04-17T23:45:47.288486669Z" level=info msg="CreateContainer within sandbox \"b99251ceda685e2f3e3a0444fd2bfaad8520ef69a953efb1b74be1e687e23756\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21\"" Apr 17 23:45:47.289573 containerd[1454]: time="2026-04-17T23:45:47.289244598Z" level=info msg="StartContainer for \"869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21\"" Apr 17 23:45:47.348009 systemd[1]: Started cri-containerd-869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21.scope - libcontainer container 869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21. 
Apr 17 23:45:47.428803 containerd[1454]: time="2026-04-17T23:45:47.428691991Z" level=info msg="StartContainer for \"869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21\" returns successfully" Apr 17 23:45:48.313660 containerd[1454]: time="2026-04-17T23:45:48.313589913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:48.315244 containerd[1454]: time="2026-04-17T23:45:48.315174790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:45:48.316969 containerd[1454]: time="2026-04-17T23:45:48.316890137Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:48.322094 containerd[1454]: time="2026-04-17T23:45:48.321873940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:48.323815 containerd[1454]: time="2026-04-17T23:45:48.323006008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.067635687s" Apr 17 23:45:48.323815 containerd[1454]: time="2026-04-17T23:45:48.323064599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:45:48.324876 containerd[1454]: time="2026-04-17T23:45:48.324840764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 
23:45:48.331325 containerd[1454]: time="2026-04-17T23:45:48.331278561Z" level=info msg="CreateContainer within sandbox \"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:45:48.358890 containerd[1454]: time="2026-04-17T23:45:48.358839874Z" level=info msg="CreateContainer within sandbox \"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e393548843852491291ce083f634791137271a1f31f5d2479f1fb21cb46663f\"" Apr 17 23:45:48.359713 containerd[1454]: time="2026-04-17T23:45:48.359673622Z" level=info msg="StartContainer for \"8e393548843852491291ce083f634791137271a1f31f5d2479f1fb21cb46663f\"" Apr 17 23:45:48.424955 systemd[1]: Started cri-containerd-8e393548843852491291ce083f634791137271a1f31f5d2479f1fb21cb46663f.scope - libcontainer container 8e393548843852491291ce083f634791137271a1f31f5d2479f1fb21cb46663f. Apr 17 23:45:48.468976 containerd[1454]: time="2026-04-17T23:45:48.468859479Z" level=info msg="StartContainer for \"8e393548843852491291ce083f634791137271a1f31f5d2479f1fb21cb46663f\" returns successfully" Apr 17 23:45:48.976813 systemd[1]: run-containerd-runc-k8s.io-869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21-runc.jiIQWB.mount: Deactivated successfully. Apr 17 23:45:50.002009 systemd[1]: run-containerd-runc-k8s.io-869b41627a0c171c4fbd38e63719fcb0c5e3907fed4c6fac4c31c906f1b81d21-runc.6Zzqc8.mount: Deactivated successfully. Apr 17 23:45:50.053135 systemd[1]: Started sshd@8-10.128.0.99:22-50.85.169.122:53486.service - OpenSSH per-connection server daemon (50.85.169.122:53486). 
Apr 17 23:45:50.764876 sshd[5567]: Accepted publickey for core from 50.85.169.122 port 53486 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I Apr 17 23:45:50.767360 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:45:50.776399 systemd-logind[1441]: New session 8 of user core. Apr 17 23:45:50.783285 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:45:51.191326 containerd[1454]: time="2026-04-17T23:45:51.191246124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:51.194241 containerd[1454]: time="2026-04-17T23:45:51.194105497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:45:51.196710 containerd[1454]: time="2026-04-17T23:45:51.195516475Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:51.207083 containerd[1454]: time="2026-04-17T23:45:51.206554396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:51.210155 containerd[1454]: time="2026-04-17T23:45:51.210091587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.884730154s" Apr 17 23:45:51.211390 containerd[1454]: time="2026-04-17T23:45:51.210914475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns 
image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:45:51.213027 containerd[1454]: time="2026-04-17T23:45:51.212995302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:45:51.221914 containerd[1454]: time="2026-04-17T23:45:51.221864620Z" level=info msg="CreateContainer within sandbox \"21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:45:51.243823 containerd[1454]: time="2026-04-17T23:45:51.243623684Z" level=info msg="CreateContainer within sandbox \"21cecedad4de71520b2e73509ff47af847b60fa5c998f1f10f01dfcd172bc9d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fc234689ec47ce884a2b746c6f6ed59730452f886a86a3d50b5f34bfc34c676a\"" Apr 17 23:45:51.249348 containerd[1454]: time="2026-04-17T23:45:51.248911255Z" level=info msg="StartContainer for \"fc234689ec47ce884a2b746c6f6ed59730452f886a86a3d50b5f34bfc34c676a\"" Apr 17 23:45:51.325037 systemd[1]: Started cri-containerd-fc234689ec47ce884a2b746c6f6ed59730452f886a86a3d50b5f34bfc34c676a.scope - libcontainer container fc234689ec47ce884a2b746c6f6ed59730452f886a86a3d50b5f34bfc34c676a. Apr 17 23:45:51.390729 containerd[1454]: time="2026-04-17T23:45:51.390390955Z" level=info msg="StartContainer for \"fc234689ec47ce884a2b746c6f6ed59730452f886a86a3d50b5f34bfc34c676a\" returns successfully" Apr 17 23:45:51.409836 sshd[5567]: pam_unix(sshd:session): session closed for user core Apr 17 23:45:51.427400 systemd[1]: sshd@8-10.128.0.99:22-50.85.169.122:53486.service: Deactivated successfully. Apr 17 23:45:51.433343 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:45:51.437012 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:45:51.443056 systemd-logind[1441]: Removed session 8. 
Apr 17 23:45:51.451906 containerd[1454]: time="2026-04-17T23:45:51.451257076Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:51.452650 containerd[1454]: time="2026-04-17T23:45:51.452494091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:45:51.458290 containerd[1454]: time="2026-04-17T23:45:51.458174435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 244.979627ms" Apr 17 23:45:51.458290 containerd[1454]: time="2026-04-17T23:45:51.458253131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:45:51.460843 containerd[1454]: time="2026-04-17T23:45:51.460345125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:45:51.467663 containerd[1454]: time="2026-04-17T23:45:51.467503507Z" level=info msg="CreateContainer within sandbox \"9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:45:51.499149 containerd[1454]: time="2026-04-17T23:45:51.497512797Z" level=info msg="CreateContainer within sandbox \"9efe3797719c8c5d0994dab4035c08163ff9a8e4a7f71746ccfa28349e4324f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ea151834afe74395ff3203227046ee7c6e58b97cb42de76ae256d565df54f1b\"" Apr 17 23:45:51.502538 containerd[1454]: time="2026-04-17T23:45:51.499684083Z" level=info msg="StartContainer 
for \"6ea151834afe74395ff3203227046ee7c6e58b97cb42de76ae256d565df54f1b\""
Apr 17 23:45:51.569029 systemd[1]: Started cri-containerd-6ea151834afe74395ff3203227046ee7c6e58b97cb42de76ae256d565df54f1b.scope - libcontainer container 6ea151834afe74395ff3203227046ee7c6e58b97cb42de76ae256d565df54f1b.
Apr 17 23:45:51.645828 containerd[1454]: time="2026-04-17T23:45:51.645734917Z" level=info msg="StartContainer for \"6ea151834afe74395ff3203227046ee7c6e58b97cb42de76ae256d565df54f1b\" returns successfully"
Apr 17 23:45:52.005458 kubelet[2580]: I0417 23:45:52.004546 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-57wrg" podStartSLOduration=42.863362664 podStartE2EDuration="51.004522977s" podCreationTimestamp="2026-04-17 23:45:01 +0000 UTC" firstStartedPulling="2026-04-17 23:45:39.108513044 +0000 UTC m=+58.006151451" lastFinishedPulling="2026-04-17 23:45:47.249673335 +0000 UTC m=+66.147311764" observedRunningTime="2026-04-17 23:45:47.959335979 +0000 UTC m=+66.856974415" watchObservedRunningTime="2026-04-17 23:45:52.004522977 +0000 UTC m=+70.902161409"
Apr 17 23:45:52.005458 kubelet[2580]: I0417 23:45:52.005019 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-64fd9bf59-fr8st" podStartSLOduration=41.587839011 podStartE2EDuration="52.005002749s" podCreationTimestamp="2026-04-17 23:45:00 +0000 UTC" firstStartedPulling="2026-04-17 23:45:40.795461692 +0000 UTC m=+59.693100124" lastFinishedPulling="2026-04-17 23:45:51.212625435 +0000 UTC m=+70.110263862" observedRunningTime="2026-04-17 23:45:51.99646332 +0000 UTC m=+70.894102031" watchObservedRunningTime="2026-04-17 23:45:52.005002749 +0000 UTC m=+70.902641180"
Apr 17 23:45:53.154542 containerd[1454]: time="2026-04-17T23:45:53.154475289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:53.157990 containerd[1454]: time="2026-04-17T23:45:53.157922543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 17 23:45:53.159471 containerd[1454]: time="2026-04-17T23:45:53.159428145Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:53.165494 containerd[1454]: time="2026-04-17T23:45:53.165439283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.705026733s"
Apr 17 23:45:53.165613 containerd[1454]: time="2026-04-17T23:45:53.165499591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 17 23:45:53.165613 containerd[1454]: time="2026-04-17T23:45:53.164160841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:53.171880 containerd[1454]: time="2026-04-17T23:45:53.171789960Z" level=info msg="CreateContainer within sandbox \"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 17 23:45:53.193633 containerd[1454]: time="2026-04-17T23:45:53.193560561Z" level=info msg="CreateContainer within sandbox \"930df46d33ea31035cf5a10d9002d175f841d5bc42a760ec35dfe4e9b48d269b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"56a89b730c0d5edc3ca2908f4b789b35939bf64e8f3aa5311aaa687aa80f51ac\""
Apr 17 23:45:53.194677 containerd[1454]: time="2026-04-17T23:45:53.194616806Z" level=info msg="StartContainer for \"56a89b730c0d5edc3ca2908f4b789b35939bf64e8f3aa5311aaa687aa80f51ac\""
Apr 17 23:45:53.260986 systemd[1]: Started cri-containerd-56a89b730c0d5edc3ca2908f4b789b35939bf64e8f3aa5311aaa687aa80f51ac.scope - libcontainer container 56a89b730c0d5edc3ca2908f4b789b35939bf64e8f3aa5311aaa687aa80f51ac.
Apr 17 23:45:53.337888 containerd[1454]: time="2026-04-17T23:45:53.337800914Z" level=info msg="StartContainer for \"56a89b730c0d5edc3ca2908f4b789b35939bf64e8f3aa5311aaa687aa80f51ac\" returns successfully"
Apr 17 23:45:53.436792 kubelet[2580]: I0417 23:45:53.436197 2580 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 17 23:45:53.436792 kubelet[2580]: I0417 23:45:53.436250 2580 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 17 23:45:53.969334 kubelet[2580]: I0417 23:45:53.968869 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:45:53.970154 kubelet[2580]: I0417 23:45:53.969699 2580 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:45:53.986787 kubelet[2580]: I0417 23:45:53.985522 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-64fd9bf59-bf5g9" podStartSLOduration=44.237294475 podStartE2EDuration="53.985498127s" podCreationTimestamp="2026-04-17 23:45:00 +0000 UTC" firstStartedPulling="2026-04-17 23:45:41.711363009 +0000 UTC m=+60.609001424" lastFinishedPulling="2026-04-17 23:45:51.459566645 +0000 UTC m=+70.357205076" observedRunningTime="2026-04-17 23:45:52.066518748 +0000 UTC m=+70.964157183" watchObservedRunningTime="2026-04-17 23:45:53.985498127 +0000 UTC m=+72.883136560"
Apr 17 23:45:53.987707 kubelet[2580]: I0417 23:45:53.987434 2580 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q4qpd" podStartSLOduration=39.611608978 podStartE2EDuration="51.987419111s" podCreationTimestamp="2026-04-17 23:45:02 +0000 UTC" firstStartedPulling="2026-04-17 23:45:40.791686293 +0000 UTC m=+59.689324704" lastFinishedPulling="2026-04-17 23:45:53.167496426 +0000 UTC m=+72.065134837" observedRunningTime="2026-04-17 23:45:53.987160494 +0000 UTC m=+72.884798916" watchObservedRunningTime="2026-04-17 23:45:53.987419111 +0000 UTC m=+72.885057535"
Apr 17 23:45:56.535191 systemd[1]: Started sshd@9-10.128.0.99:22-50.85.169.122:53492.service - OpenSSH per-connection server daemon (50.85.169.122:53492).
Apr 17 23:45:57.248955 sshd[5732]: Accepted publickey for core from 50.85.169.122 port 53492 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:45:57.251041 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:45:57.258037 systemd-logind[1441]: New session 9 of user core.
Apr 17 23:45:57.263042 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:45:57.829937 sshd[5732]: pam_unix(sshd:session): session closed for user core
Apr 17 23:45:57.835817 systemd[1]: sshd@9-10.128.0.99:22-50.85.169.122:53492.service: Deactivated successfully.
Apr 17 23:45:57.838849 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:45:57.840599 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:45:57.842442 systemd-logind[1441]: Removed session 9.
Apr 17 23:46:02.948563 systemd[1]: Started sshd@10-10.128.0.99:22-50.85.169.122:42566.service - OpenSSH per-connection server daemon (50.85.169.122:42566).
Apr 17 23:46:03.630142 sshd[5798]: Accepted publickey for core from 50.85.169.122 port 42566 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:03.631073 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:03.638043 systemd-logind[1441]: New session 10 of user core.
Apr 17 23:46:03.642992 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:46:04.184576 sshd[5798]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:04.192560 systemd[1]: sshd@10-10.128.0.99:22-50.85.169.122:42566.service: Deactivated successfully.
Apr 17 23:46:04.195564 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:46:04.196834 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:46:04.198295 systemd-logind[1441]: Removed session 10.
Apr 17 23:46:09.310211 systemd[1]: Started sshd@11-10.128.0.99:22-50.85.169.122:42582.service - OpenSSH per-connection server daemon (50.85.169.122:42582).
Apr 17 23:46:10.005932 sshd[5834]: Accepted publickey for core from 50.85.169.122 port 42582 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:10.008658 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:10.015655 systemd-logind[1441]: New session 11 of user core.
Apr 17 23:46:10.022019 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:46:10.571731 sshd[5834]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:10.578121 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:46:10.578900 systemd[1]: sshd@11-10.128.0.99:22-50.85.169.122:42582.service: Deactivated successfully.
Apr 17 23:46:10.583130 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:46:10.585697 systemd-logind[1441]: Removed session 11.
Apr 17 23:46:10.692639 systemd[1]: Started sshd@12-10.128.0.99:22-50.85.169.122:44620.service - OpenSSH per-connection server daemon (50.85.169.122:44620).
Apr 17 23:46:11.368256 sshd[5875]: Accepted publickey for core from 50.85.169.122 port 44620 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:11.370690 sshd[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:11.379797 systemd-logind[1441]: New session 12 of user core.
Apr 17 23:46:11.386486 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:46:11.967280 sshd[5875]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:11.972647 systemd[1]: sshd@12-10.128.0.99:22-50.85.169.122:44620.service: Deactivated successfully.
Apr 17 23:46:11.975969 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:46:11.978043 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:46:11.980274 systemd-logind[1441]: Removed session 12.
Apr 17 23:46:12.090958 systemd[1]: Started sshd@13-10.128.0.99:22-50.85.169.122:44628.service - OpenSSH per-connection server daemon (50.85.169.122:44628).
Apr 17 23:46:12.808512 sshd[5886]: Accepted publickey for core from 50.85.169.122 port 44628 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:12.810356 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:12.817601 systemd-logind[1441]: New session 13 of user core.
Apr 17 23:46:12.820997 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:46:13.384571 sshd[5886]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:13.389898 systemd[1]: sshd@13-10.128.0.99:22-50.85.169.122:44628.service: Deactivated successfully.
Apr 17 23:46:13.396237 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:46:13.399603 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:46:13.401728 systemd-logind[1441]: Removed session 13.
Apr 17 23:46:18.510899 systemd[1]: Started sshd@14-10.128.0.99:22-50.85.169.122:44640.service - OpenSSH per-connection server daemon (50.85.169.122:44640).
Apr 17 23:46:19.222954 sshd[5921]: Accepted publickey for core from 50.85.169.122 port 44640 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:19.224928 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:19.232494 systemd-logind[1441]: New session 14 of user core.
Apr 17 23:46:19.238983 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:46:19.822399 sshd[5921]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:19.833689 systemd[1]: sshd@14-10.128.0.99:22-50.85.169.122:44640.service: Deactivated successfully.
Apr 17 23:46:19.839642 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:46:19.841683 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:46:19.845555 systemd-logind[1441]: Removed session 14.
Apr 17 23:46:19.957322 systemd[1]: Started sshd@15-10.128.0.99:22-50.85.169.122:42044.service - OpenSSH per-connection server daemon (50.85.169.122:42044).
Apr 17 23:46:20.718921 sshd[5934]: Accepted publickey for core from 50.85.169.122 port 42044 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:20.722265 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:20.735053 systemd-logind[1441]: New session 15 of user core.
Apr 17 23:46:20.741009 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:46:21.420404 sshd[5934]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:21.435447 systemd[1]: sshd@15-10.128.0.99:22-50.85.169.122:42044.service: Deactivated successfully.
Apr 17 23:46:21.439650 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:46:21.442140 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:46:21.445198 systemd-logind[1441]: Removed session 15.
Apr 17 23:46:21.543915 systemd[1]: Started sshd@16-10.128.0.99:22-50.85.169.122:42056.service - OpenSSH per-connection server daemon (50.85.169.122:42056).
Apr 17 23:46:22.226807 sshd[5965]: Accepted publickey for core from 50.85.169.122 port 42056 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:22.228005 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:22.234791 systemd-logind[1441]: New session 16 of user core.
Apr 17 23:46:22.239977 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:46:23.412371 sshd[5965]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:23.417850 systemd[1]: sshd@16-10.128.0.99:22-50.85.169.122:42056.service: Deactivated successfully.
Apr 17 23:46:23.420720 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:46:23.423392 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:46:23.425900 systemd-logind[1441]: Removed session 16.
Apr 17 23:46:23.544268 systemd[1]: Started sshd@17-10.128.0.99:22-50.85.169.122:42072.service - OpenSSH per-connection server daemon (50.85.169.122:42072).
Apr 17 23:46:24.224800 sshd[5997]: Accepted publickey for core from 50.85.169.122 port 42072 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:24.226092 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:24.232038 systemd-logind[1441]: New session 17 of user core.
Apr 17 23:46:24.240991 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:46:24.925245 sshd[5997]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:24.931056 systemd[1]: sshd@17-10.128.0.99:22-50.85.169.122:42072.service: Deactivated successfully.
Apr 17 23:46:24.934683 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:46:24.937548 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:46:24.939418 systemd-logind[1441]: Removed session 17.
Apr 17 23:46:25.055683 systemd[1]: Started sshd@18-10.128.0.99:22-50.85.169.122:42078.service - OpenSSH per-connection server daemon (50.85.169.122:42078).
Apr 17 23:46:25.734779 sshd[6008]: Accepted publickey for core from 50.85.169.122 port 42078 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:25.736712 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:25.743844 systemd-logind[1441]: New session 18 of user core.
Apr 17 23:46:25.748978 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:46:26.288241 sshd[6008]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:26.294081 systemd[1]: sshd@18-10.128.0.99:22-50.85.169.122:42078.service: Deactivated successfully.
Apr 17 23:46:26.297612 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:46:26.299038 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:46:26.302313 systemd-logind[1441]: Removed session 18.
Apr 17 23:46:31.415524 systemd[1]: Started sshd@19-10.128.0.99:22-50.85.169.122:49550.service - OpenSSH per-connection server daemon (50.85.169.122:49550).
Apr 17 23:46:32.135698 sshd[6044]: Accepted publickey for core from 50.85.169.122 port 49550 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:32.137733 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:32.144211 systemd-logind[1441]: New session 19 of user core.
Apr 17 23:46:32.152052 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:46:32.705456 sshd[6044]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:32.710085 systemd[1]: sshd@19-10.128.0.99:22-50.85.169.122:49550.service: Deactivated successfully.
Apr 17 23:46:32.713412 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:46:32.716296 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:46:32.718326 systemd-logind[1441]: Removed session 19.
Apr 17 23:46:37.828184 systemd[1]: Started sshd@20-10.128.0.99:22-50.85.169.122:49562.service - OpenSSH per-connection server daemon (50.85.169.122:49562).
Apr 17 23:46:38.119238 systemd[1]: Started sshd@21-10.128.0.99:22-185.114.206.48:44886.service - OpenSSH per-connection server daemon (185.114.206.48:44886).
Apr 17 23:46:38.391794 sshd[6060]: Invalid user orangepi from 185.114.206.48 port 44886
Apr 17 23:46:38.457485 sshd[6060]: Connection closed by invalid user orangepi 185.114.206.48 port 44886 [preauth]
Apr 17 23:46:38.461118 systemd[1]: sshd@21-10.128.0.99:22-185.114.206.48:44886.service: Deactivated successfully.
Apr 17 23:46:38.498890 sshd[6057]: Accepted publickey for core from 50.85.169.122 port 49562 ssh2: RSA SHA256:Pmc+bTBNIj4mkiFQF5kVSsQsgLp29+aFd4ERiVF1B2I
Apr 17 23:46:38.500712 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:38.507745 systemd-logind[1441]: New session 20 of user core.
Apr 17 23:46:38.516000 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:46:39.047832 sshd[6057]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:39.054362 systemd[1]: sshd@20-10.128.0.99:22-50.85.169.122:49562.service: Deactivated successfully.
Apr 17 23:46:39.057841 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:46:39.059400 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:46:39.061457 systemd-logind[1441]: Removed session 20.