Mar 7 01:13:39.079558 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:13:39.079591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:13:39.079610 kernel: BIOS-provided physical RAM map:
Mar 7 01:13:39.079619 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 7 01:13:39.079627 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 7 01:13:39.079635 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 7 01:13:39.079645 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 7 01:13:39.079656 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 7 01:13:39.079664 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Mar 7 01:13:39.079672 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Mar 7 01:13:39.079681 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Mar 7 01:13:39.079689 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Mar 7 01:13:39.079697 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 7 01:13:39.079705 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 7 01:13:39.079718 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 7 01:13:39.079728 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 7 01:13:39.079737 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 7 01:13:39.079746 kernel: NX (Execute Disable) protection: active
Mar 7 01:13:39.079755 kernel: APIC: Static calls initialized
Mar 7 01:13:39.079765 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:13:39.079774 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Mar 7 01:13:39.079783 kernel: SMBIOS 2.4 present.
Mar 7 01:13:39.079792 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Mar 7 01:13:39.079802 kernel: Hypervisor detected: KVM
Mar 7 01:13:39.079813 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:13:39.079822 kernel: kvm-clock: using sched offset of 12686123171 cycles
Mar 7 01:13:39.079832 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:13:39.079842 kernel: tsc: Detected 2299.998 MHz processor
Mar 7 01:13:39.079854 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:13:39.079864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:13:39.079873 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 7 01:13:39.079883 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 7 01:13:39.079911 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:13:39.079930 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 7 01:13:39.079946 kernel: Using GB pages for direct mapping
Mar 7 01:13:39.079957 kernel: Secure boot disabled
Mar 7 01:13:39.079967 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:13:39.079977 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 7 01:13:39.079986 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 7 01:13:39.079996 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 7 01:13:39.080010 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 7 01:13:39.080022 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 7 01:13:39.080032 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Mar 7 01:13:39.080042 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 7 01:13:39.080052 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 7 01:13:39.080062 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 7 01:13:39.080072 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 7 01:13:39.080085 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 7 01:13:39.080095 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 7 01:13:39.080105 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 7 01:13:39.080115 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 7 01:13:39.080125 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 7 01:13:39.080135 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 7 01:13:39.080145 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 7 01:13:39.080154 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 7 01:13:39.080164 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 7 01:13:39.080177 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 7 01:13:39.080187 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:13:39.080197 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:13:39.080206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 7 01:13:39.080217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 7 01:13:39.080226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 7 01:13:39.080236 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 7 01:13:39.080247 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 7 01:13:39.080257 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Mar 7 01:13:39.080270 kernel: Zone ranges:
Mar 7 01:13:39.080280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:13:39.080290 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:13:39.080299 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 7 01:13:39.080309 kernel: Movable zone start for each node
Mar 7 01:13:39.080319 kernel: Early memory node ranges
Mar 7 01:13:39.080329 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 7 01:13:39.080339 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 7 01:13:39.080349 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Mar 7 01:13:39.080359 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 7 01:13:39.080371 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 7 01:13:39.080381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 7 01:13:39.080391 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:13:39.080401 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 7 01:13:39.080411 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 7 01:13:39.080421 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 7 01:13:39.080431 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 7 01:13:39.080441 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:13:39.080450 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:13:39.080463 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:13:39.080473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:13:39.080483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:13:39.080493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:13:39.080503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:13:39.080513 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:13:39.080523 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:13:39.080532 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 7 01:13:39.080542 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:13:39.080555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:13:39.080565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:13:39.080575 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:13:39.080585 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:13:39.080600 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:13:39.080610 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:13:39.080620 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:13:39.080631 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:13:39.080645 kernel: random: crng init done
Mar 7 01:13:39.080654 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:13:39.080665 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:13:39.080675 kernel: Fallback order for Node 0: 0
Mar 7 01:13:39.080685 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Mar 7 01:13:39.080694 kernel: Policy zone: Normal
Mar 7 01:13:39.080704 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:13:39.080714 kernel: software IO TLB: area num 2.
Mar 7 01:13:39.080724 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347148K reserved, 0K cma-reserved)
Mar 7 01:13:39.080737 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:13:39.080747 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:13:39.080757 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:13:39.080767 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:13:39.080776 kernel: Dynamic Preempt: voluntary
Mar 7 01:13:39.080786 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:13:39.080797 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:13:39.080807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:13:39.080830 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:13:39.080841 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:13:39.080854 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:13:39.080864 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:13:39.080877 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:13:39.080888 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:13:39.080936 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:13:39.080953 kernel: Console: colour dummy device 80x25
Mar 7 01:13:39.080974 kernel: printk: console [ttyS0] enabled
Mar 7 01:13:39.080991 kernel: ACPI: Core revision 20230628
Mar 7 01:13:39.081010 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:13:39.081028 kernel: x2apic enabled
Mar 7 01:13:39.081048 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:13:39.081065 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 7 01:13:39.081084 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:13:39.081102 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 7 01:13:39.081122 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 7 01:13:39.081142 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 7 01:13:39.081164 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:13:39.081182 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 7 01:13:39.081200 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 7 01:13:39.081218 kernel: Spectre V2 : Mitigation: IBRS
Mar 7 01:13:39.081235 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:13:39.081257 kernel: RETBleed: Mitigation: IBRS
Mar 7 01:13:39.081278 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:13:39.081295 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 7 01:13:39.081317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:13:39.081335 kernel: MDS: Mitigation: Clear CPU buffers
Mar 7 01:13:39.081354 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:13:39.081373 kernel: active return thunk: its_return_thunk
Mar 7 01:13:39.081392 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:13:39.081411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:13:39.081429 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:13:39.081448 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:13:39.081468 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:13:39.081492 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 7 01:13:39.081512 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:13:39.081533 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:13:39.081552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:13:39.081572 kernel: landlock: Up and running.
Mar 7 01:13:39.081589 kernel: SELinux: Initializing.
Mar 7 01:13:39.081617 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.081635 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.081655 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 7 01:13:39.081678 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:13:39.081698 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:13:39.081718 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:13:39.081738 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 7 01:13:39.081754 kernel: signal: max sigframe size: 1776
Mar 7 01:13:39.081771 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:13:39.081790 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:13:39.081808 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:13:39.081827 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:13:39.081849 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:13:39.081869 kernel: .... node #0, CPUs: #1
Mar 7 01:13:39.081910 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:13:39.081933 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:13:39.081953 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:13:39.081972 kernel: smpboot: Max logical packages: 1
Mar 7 01:13:39.081992 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 7 01:13:39.082010 kernel: devtmpfs: initialized
Mar 7 01:13:39.082033 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:13:39.082051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 7 01:13:39.082070 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:13:39.082088 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:13:39.082107 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:13:39.082127 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:13:39.082146 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:13:39.082165 kernel: audit: type=2000 audit(1772846017.850:1): state=initialized audit_enabled=0 res=1
Mar 7 01:13:39.082184 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:13:39.082208 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:13:39.082227 kernel: cpuidle: using governor menu
Mar 7 01:13:39.082246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:13:39.082265 kernel: dca service started, version 1.12.1
Mar 7 01:13:39.082284 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:13:39.082304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:13:39.082323 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:13:39.082342 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:13:39.082361 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:13:39.082385 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:13:39.082404 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:13:39.082424 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:13:39.082443 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:13:39.082462 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:13:39.082481 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:13:39.082499 kernel: ACPI: Interpreter enabled
Mar 7 01:13:39.082525 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:13:39.082544 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:13:39.082563 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:13:39.082588 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:13:39.082616 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 7 01:13:39.082636 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:13:39.082934 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:13:39.083155 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:13:39.083350 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:13:39.083376 kernel: PCI host bridge to bus 0000:00
Mar 7 01:13:39.083582 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:13:39.083774 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:13:39.083971 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:13:39.084142 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 7 01:13:39.084307 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:13:39.084512 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:13:39.084732 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 7 01:13:39.084955 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 7 01:13:39.085156 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:13:39.085355 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 7 01:13:39.085987 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:13:39.086205 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 7 01:13:39.086422 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:13:39.086644 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:13:39.086853 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 7 01:13:39.087620 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:13:39.087838 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 7 01:13:39.088133 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 7 01:13:39.088163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:13:39.088183 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:13:39.088207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:13:39.088225 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:13:39.088244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:13:39.088265 kernel: iommu: Default domain type: Translated
Mar 7 01:13:39.088287 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:13:39.088305 kernel: efivars: Registered efivars operations
Mar 7 01:13:39.088322 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:13:39.088340 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:13:39.088358 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 7 01:13:39.088382 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 7 01:13:39.088401 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 7 01:13:39.088420 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 7 01:13:39.088439 kernel: vgaarb: loaded
Mar 7 01:13:39.088459 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:13:39.088479 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:13:39.088499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:13:39.088519 kernel: pnp: PnP ACPI init
Mar 7 01:13:39.088539 kernel: pnp: PnP ACPI: found 7 devices
Mar 7 01:13:39.088563 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:13:39.088582 kernel: NET: Registered PF_INET protocol family
Mar 7 01:13:39.088612 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:13:39.088631 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:13:39.088650 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:13:39.088671 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:13:39.088691 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:13:39.088710 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:13:39.088734 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.088751 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.088769 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:13:39.088787 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:13:39.089069 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:13:39.089258 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:13:39.089440 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:13:39.089637 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 7 01:13:39.089852 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:13:39.089879 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:13:39.089913 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:13:39.089932 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 7 01:13:39.089951 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:13:39.089971 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:13:39.089991 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:13:39.090012 kernel: Initialise system trusted keyrings
Mar 7 01:13:39.090037 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:13:39.090057 kernel: Key type asymmetric registered
Mar 7 01:13:39.090076 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:13:39.090095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:13:39.090115 kernel: io scheduler mq-deadline registered
Mar 7 01:13:39.090135 kernel: io scheduler kyber registered
Mar 7 01:13:39.090155 kernel: io scheduler bfq registered
Mar 7 01:13:39.090174 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:13:39.090195 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 7 01:13:39.090421 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 7 01:13:39.090446 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 7 01:13:39.090666 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 7 01:13:39.090693 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 7 01:13:39.090883 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 7 01:13:39.090933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:13:39.090952 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:13:39.090969 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:13:39.090987 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 7 01:13:39.091011 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 7 01:13:39.091208 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 7 01:13:39.091233 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:13:39.091252 kernel: i8042: Warning: Keylock active
Mar 7 01:13:39.091271 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:13:39.091289 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:13:39.091485 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:13:39.091675 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:13:39.091858 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:13:38 UTC (1772846018)
Mar 7 01:13:39.092084 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:13:39.092111 kernel: intel_pstate: CPU model not supported
Mar 7 01:13:39.092129 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:13:39.092149 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:13:39.092167 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:13:39.092184 kernel: Segment Routing with IPv6
Mar 7 01:13:39.092203 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:13:39.092229 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:13:39.092247 kernel: Key type dns_resolver registered
Mar 7 01:13:39.092266 kernel: IPI shorthand broadcast: enabled
Mar 7 01:13:39.092282 kernel: sched_clock: Marking stable (862004783, 132318920)->(1007408309, -13084606)
Mar 7 01:13:39.092299 kernel: registered taskstats version 1
Mar 7 01:13:39.092317 kernel: Loading compiled-in X.509 certificates
Mar 7 01:13:39.092337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:13:39.092356 kernel: Key type .fscrypt registered
Mar 7 01:13:39.092375 kernel: Key type fscrypt-provisioning registered
Mar 7 01:13:39.092398 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:13:39.092417 kernel: ima: No architecture policies found
Mar 7 01:13:39.092435 kernel: clk: Disabling unused clocks
Mar 7 01:13:39.092454 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:13:39.092472 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:13:39.092490 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:13:39.092508 kernel: Run /init as init process
Mar 7 01:13:39.092525 kernel: with arguments:
Mar 7 01:13:39.092543 kernel: /init
Mar 7 01:13:39.092565 kernel: with environment:
Mar 7 01:13:39.092584 kernel: HOME=/
Mar 7 01:13:39.092612 kernel: TERM=linux
Mar 7 01:13:39.092631 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:13:39.092654 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:13:39.092677 systemd[1]: Detected virtualization google.
Mar 7 01:13:39.092696 systemd[1]: Detected architecture x86-64.
Mar 7 01:13:39.092719 systemd[1]: Running in initrd.
Mar 7 01:13:39.092738 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:13:39.092757 systemd[1]: Hostname set to .
Mar 7 01:13:39.092778 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:13:39.092797 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:13:39.092816 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:13:39.092836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:13:39.092856 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:13:39.092880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:13:39.092942 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:13:39.092963 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:13:39.092985 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:13:39.093004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:13:39.093024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:13:39.093043 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:13:39.093066 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:13:39.093087 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:13:39.093127 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:13:39.093151 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:13:39.093171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:13:39.093192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:13:39.093212 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:13:39.093236 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:13:39.093256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:13:39.093277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:13:39.093298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:13:39.093318 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:13:39.093339 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:13:39.093359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:13:39.093379 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:13:39.093403 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:13:39.093423 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:13:39.093443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:13:39.093464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:13:39.093520 systemd-journald[184]: Collecting audit messages is disabled.
Mar 7 01:13:39.093578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:13:39.093616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:13:39.093638 systemd-journald[184]: Journal started
Mar 7 01:13:39.093678 systemd-journald[184]: Runtime Journal (/run/log/journal/993e848e5a444a8eb6fa5374f418db72) is 8.0M, max 148.7M, 140.7M free.
Mar 7 01:13:39.102915 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:13:39.102987 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:13:39.115258 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:13:39.123427 systemd-modules-load[185]: Inserted module 'overlay'
Mar 7 01:13:39.124106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:13:39.133265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:13:39.141361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:13:39.160085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:13:39.169104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:13:39.176060 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:13:39.169663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:13:39.181031 kernel: Bridge firewalling registered
Mar 7 01:13:39.177265 systemd-modules-load[185]: Inserted module 'br_netfilter'
Mar 7 01:13:39.182234 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:13:39.197283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:13:39.214264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:13:39.221659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:13:39.226683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:13:39.236132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:13:39.248091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:13:39.268757 dracut-cmdline[217]: dracut-dracut-053
Mar 7 01:13:39.273483 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:13:39.298770 systemd-resolved[218]: Positive Trust Anchors:
Mar 7 01:13:39.298790 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:13:39.298854 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:13:39.306186 systemd-resolved[218]: Defaulting to hostname 'linux'.
Mar 7 01:13:39.307940 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:13:39.314648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:13:39.382949 kernel: SCSI subsystem initialized
Mar 7 01:13:39.393920 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:13:39.406934 kernel: iscsi: registered transport (tcp)
Mar 7 01:13:39.431421 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:13:39.431505 kernel: QLogic iSCSI HBA Driver
Mar 7 01:13:39.486225 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:13:39.503201 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:13:39.533930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:13:39.534018 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:13:39.534043 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:13:39.579939 kernel: raid6: avx2x4 gen() 18168 MB/s
Mar 7 01:13:39.596922 kernel: raid6: avx2x2 gen() 17627 MB/s
Mar 7 01:13:39.614401 kernel: raid6: avx2x1 gen() 13456 MB/s
Mar 7 01:13:39.614488 kernel: raid6: using algorithm avx2x4 gen() 18168 MB/s
Mar 7 01:13:39.632554 kernel: raid6: .... xor() 7450 MB/s, rmw enabled
Mar 7 01:13:39.632621 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:13:39.655935 kernel: xor: automatically using best checksumming function avx
Mar 7 01:13:39.831010 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:13:39.844405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:13:39.850261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:13:39.878015 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Mar 7 01:13:39.885048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:13:39.896170 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:13:39.928853 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Mar 7 01:13:39.968410 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:13:39.973183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:13:40.066810 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:13:40.081106 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:13:40.120346 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:13:40.132392 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:13:40.142010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:13:40.147214 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:13:40.167032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:13:40.207964 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:13:40.211874 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:13:40.241270 kernel: scsi host0: Virtio SCSI HBA
Mar 7 01:13:40.241416 kernel: blk-mq: reduced tag depth to 10240
Mar 7 01:13:40.249622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:13:40.258955 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 7 01:13:40.259314 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:13:40.259347 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:13:40.249944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:13:40.264764 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:13:40.285208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:13:40.285820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:13:40.293121 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:13:40.304424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:13:40.347806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:13:40.360525 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Mar 7 01:13:40.360997 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 7 01:13:40.364007 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 7 01:13:40.364317 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 7 01:13:40.364573 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 01:13:40.366135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:13:40.376328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:13:40.376407 kernel: GPT:17805311 != 33554431
Mar 7 01:13:40.376442 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:13:40.376466 kernel: GPT:17805311 != 33554431
Mar 7 01:13:40.376494 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:13:40.376518 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:13:40.378921 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 7 01:13:40.387821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:13:40.439918 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (454)
Mar 7 01:13:40.439994 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458)
Mar 7 01:13:40.439847 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 7 01:13:40.468528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 7 01:13:40.476415 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 7 01:13:40.483328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 7 01:13:40.483597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 7 01:13:40.496292 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:13:40.514533 disk-uuid[552]: Primary Header is updated.
Mar 7 01:13:40.514533 disk-uuid[552]: Secondary Entries is updated.
Mar 7 01:13:40.514533 disk-uuid[552]: Secondary Header is updated.
Mar 7 01:13:40.526914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:13:40.536012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:13:40.549919 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:13:41.560013 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:13:41.560692 disk-uuid[553]: The operation has completed successfully.
Mar 7 01:13:41.630479 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:13:41.630630 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:13:41.661123 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:13:41.691312 sh[570]: Success
Mar 7 01:13:41.715952 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:13:41.800480 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:13:41.807945 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:13:41.836471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:13:41.884297 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:13:41.884404 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:13:41.884431 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:13:41.900574 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:13:41.900636 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:13:41.934932 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:13:41.941913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:13:41.942915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:13:41.947131 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:13:42.026087 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:13:42.026135 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:13:42.026163 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:13:42.026187 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:13:42.026212 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:13:42.004632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:13:42.050968 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:13:42.065672 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:13:42.093159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:13:42.133045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:13:42.140248 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:13:42.220317 systemd-networkd[752]: lo: Link UP
Mar 7 01:13:42.220331 systemd-networkd[752]: lo: Gained carrier
Mar 7 01:13:42.222999 systemd-networkd[752]: Enumeration completed
Mar 7 01:13:42.223576 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:13:42.223584 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:13:42.224387 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:13:42.226408 systemd-networkd[752]: eth0: Link UP
Mar 7 01:13:42.226415 systemd-networkd[752]: eth0: Gained carrier
Mar 7 01:13:42.296117 ignition[709]: Ignition 2.19.0
Mar 7 01:13:42.226428 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:13:42.296128 ignition[709]: Stage: fetch-offline
Mar 7 01:13:42.261011 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4'
Mar 7 01:13:42.296174 ignition[709]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:42.261035 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 7 01:13:42.296186 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:42.299499 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:13:42.296310 ignition[709]: parsed url from cmdline: ""
Mar 7 01:13:42.307488 systemd[1]: Reached target network.target - Network.
Mar 7 01:13:42.296317 ignition[709]: no config URL provided
Mar 7 01:13:42.338099 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:13:42.296326 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:13:42.393597 unknown[762]: fetched base config from "system"
Mar 7 01:13:42.296340 ignition[709]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:13:42.393607 unknown[762]: fetched base config from "system"
Mar 7 01:13:42.296349 ignition[709]: failed to fetch config: resource requires networking
Mar 7 01:13:42.393614 unknown[762]: fetched user config from "gcp"
Mar 7 01:13:42.296641 ignition[709]: Ignition finished successfully
Mar 7 01:13:42.397288 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:13:42.383876 ignition[762]: Ignition 2.19.0
Mar 7 01:13:42.417125 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:13:42.383887 ignition[762]: Stage: fetch
Mar 7 01:13:42.464954 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:13:42.384148 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:42.489126 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:13:42.384167 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:42.517632 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:13:42.384342 ignition[762]: parsed url from cmdline: ""
Mar 7 01:13:42.542054 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:13:42.384350 ignition[762]: no config URL provided
Mar 7 01:13:42.551311 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:13:42.384360 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:13:42.566345 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:13:42.384371 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:13:42.583320 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:13:42.384401 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 7 01:13:42.600301 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:13:42.387981 ignition[762]: GET result: OK
Mar 7 01:13:42.621264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:13:42.388055 ignition[762]: parsing config with SHA512: ad635e12abec85915ffff05f03636a97a12d3584b108630711ac5dde80d3b5a6e7d2ad569bfd965589520126a68d29214085c949268e33db4c332fa7c7bf3d71
Mar 7 01:13:42.395229 ignition[762]: fetch: fetch complete
Mar 7 01:13:42.395240 ignition[762]: fetch: fetch passed
Mar 7 01:13:42.395322 ignition[762]: Ignition finished successfully
Mar 7 01:13:42.461653 ignition[767]: Ignition 2.19.0
Mar 7 01:13:42.461663 ignition[767]: Stage: kargs
Mar 7 01:13:42.461869 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:42.461882 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:42.463751 ignition[767]: kargs: kargs passed
Mar 7 01:13:42.463837 ignition[767]: Ignition finished successfully
Mar 7 01:13:42.510040 ignition[774]: Ignition 2.19.0
Mar 7 01:13:42.510053 ignition[774]: Stage: disks
Mar 7 01:13:42.510382 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:42.510401 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:42.512438 ignition[774]: disks: disks passed
Mar 7 01:13:42.512524 ignition[774]: Ignition finished successfully
Mar 7 01:13:42.685829 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 7 01:13:42.855007 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:13:42.860057 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:13:43.013501 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:13:43.013392 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:13:43.022881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:13:43.052052 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:13:43.099824 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790)
Mar 7 01:13:43.099868 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:13:43.099929 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:13:43.099955 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:13:43.093421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:13:43.126931 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:13:43.127029 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:13:43.127477 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:13:43.127576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:13:43.127624 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:13:43.153403 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:13:43.170469 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:13:43.201152 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:13:43.342563 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:13:43.353083 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:13:43.363022 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:13:43.373067 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:13:43.518856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:13:43.526233 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:13:43.560948 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:13:43.567164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:13:43.577179 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:13:43.606705 ignition[902]: INFO : Ignition 2.19.0
Mar 7 01:13:43.606705 ignition[902]: INFO : Stage: mount
Mar 7 01:13:43.621055 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:43.621055 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:43.621055 ignition[902]: INFO : mount: mount passed
Mar 7 01:13:43.621055 ignition[902]: INFO : Ignition finished successfully
Mar 7 01:13:43.610100 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:13:43.643431 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:13:43.671029 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:13:43.705139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:13:43.736927 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Mar 7 01:13:43.755118 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:13:43.755211 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:13:43.755239 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:13:43.777145 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:13:43.777251 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:13:43.780235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:13:43.815217 ignition[931]: INFO : Ignition 2.19.0
Mar 7 01:13:43.815217 ignition[931]: INFO : Stage: files
Mar 7 01:13:43.830023 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:43.830023 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:43.830023 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:13:43.830023 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:13:43.823803 unknown[931]: wrote ssh authorized keys file for user: core
Mar 7 01:13:43.997038 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:13:44.159293 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:13:44.181168 systemd-networkd[752]: eth0: Gained IPv6LL
Mar 7 01:13:44.590648 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:13:45.246244 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:45.246244 ignition[931]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: files passed
Mar 7 01:13:45.283226 ignition[931]: INFO : Ignition finished successfully
Mar 7 01:13:45.252926 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:13:45.271146 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:13:45.300002 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:13:45.391687 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:13:45.547132 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:13:45.547132 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:13:45.391811 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:13:45.614083 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:13:45.403571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:13:45.424535 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:13:45.452151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:13:45.543835 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:13:45.544140 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:13:45.558521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:13:45.572334 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:13:45.604286 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:13:45.611216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:13:45.661086 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:13:45.686148 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:13:45.729554 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:13:45.742267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:13:45.766339 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:13:45.785298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:13:45.785531 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:13:45.819358 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:13:45.839300 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:13:45.857339 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:13:45.875358 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:13:45.894334 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:13:45.917288 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:13:45.937246 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:13:45.956348 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:13:45.976333 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:13:45.996287 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:13:46.014185 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:13:46.014359 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:13:46.045385 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:13:46.065260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:13:46.086212 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:13:46.086459 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:13:46.104208 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:13:46.104380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:13:46.232086 ignition[983]: INFO : Ignition 2.19.0
Mar 7 01:13:46.232086 ignition[983]: INFO : Stage: umount
Mar 7 01:13:46.232086 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:46.232086 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:46.232086 ignition[983]: INFO : umount: umount passed
Mar 7 01:13:46.232086 ignition[983]: INFO : Ignition finished successfully
Mar 7 01:13:46.132362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:13:46.132594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:13:46.154340 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:13:46.154498 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:13:46.182163 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:13:46.211304 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:13:46.222096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:13:46.222374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:13:46.244355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:13:46.244575 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:13:46.260559 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:13:46.261666 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:13:46.261782 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:13:46.274574 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:13:46.274693 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:13:46.292590 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:13:46.292724 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:13:46.312266 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:13:46.312362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:13:46.333162 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:13:46.333251 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:13:46.350137 systemd[1]: Stopped target network.target - Network.
Mar 7 01:13:46.365064 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:13:46.365197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:13:46.386139 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:13:46.404083 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:13:46.408048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:13:46.423067 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:13:46.438076 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:13:46.455128 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:13:46.455233 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:13:46.473143 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:13:46.473241 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:13:46.493146 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:13:46.493262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:13:46.513163 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:13:46.513259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:13:46.533172 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:13:46.533275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:13:46.553504 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:13:46.559983 systemd-networkd[752]: eth0: DHCPv6 lease lost
Mar 7 01:13:46.580282 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:13:46.598749 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:13:46.598918 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:13:46.608655 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:13:46.608804 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:13:46.634801 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:13:46.634943 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:13:46.647993 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:13:46.648064 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:13:46.686079 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:13:46.696251 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:13:46.696344 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:13:47.206103 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:13:46.722303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:13:46.722396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:13:46.740303 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:13:46.740389 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:13:46.760251 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:13:46.760333 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:13:46.770494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:13:46.788803 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:13:46.789017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:13:46.824392 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:13:46.824462 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:13:46.831285 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:13:46.831335 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:13:46.848273 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:13:46.848367 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:13:46.889052 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:13:46.889183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:13:46.907275 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:13:46.907364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:13:46.941320 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:13:46.954265 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:13:46.954353 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:13:46.990302 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:13:46.990390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:13:47.000415 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:13:47.000486 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:13:47.032233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:13:47.032315 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:13:47.039808 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:13:47.039967 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:13:47.074592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:13:47.074713 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:13:47.085734 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:13:47.126159 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:13:47.159709 systemd[1]: Switching root.
Mar 7 01:13:47.555073 systemd-journald[184]: Journal stopped
Mar 7 01:13:39.079558 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:13:39.079591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:13:39.079610 kernel: BIOS-provided physical RAM map:
Mar 7 01:13:39.079619 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 7 01:13:39.079627 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 7 01:13:39.079635 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 7 01:13:39.079645 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 7 01:13:39.079656 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 7 01:13:39.079664 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Mar 7 01:13:39.079672 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Mar 7 01:13:39.079681 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Mar 7 01:13:39.079689 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Mar 7 01:13:39.079697 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 7 01:13:39.079705 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 7 01:13:39.079718 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 7 01:13:39.079728 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 7 01:13:39.079737 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 7 01:13:39.079746 kernel: NX (Execute Disable) protection: active
Mar 7 01:13:39.079755 kernel: APIC: Static calls initialized
Mar 7 01:13:39.079765 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:13:39.079774 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Mar 7 01:13:39.079783 kernel: SMBIOS 2.4 present.
Mar 7 01:13:39.079792 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Mar 7 01:13:39.079802 kernel: Hypervisor detected: KVM
Mar 7 01:13:39.079813 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:13:39.079822 kernel: kvm-clock: using sched offset of 12686123171 cycles
Mar 7 01:13:39.079832 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:13:39.079842 kernel: tsc: Detected 2299.998 MHz processor
Mar 7 01:13:39.079854 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:13:39.079864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:13:39.079873 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 7 01:13:39.079883 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 7 01:13:39.079911 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:13:39.079930 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 7 01:13:39.079946 kernel: Using GB pages for direct mapping
Mar 7 01:13:39.079957 kernel: Secure boot disabled
Mar 7 01:13:39.079967 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:13:39.079977 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 7 01:13:39.079986 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 7 01:13:39.079996 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 7 01:13:39.080010 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 7 01:13:39.080022 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 7 01:13:39.080032 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Mar 7 01:13:39.080042 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 7 01:13:39.080052 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 7 01:13:39.080062 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 7 01:13:39.080072 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 7 01:13:39.080085 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 7 01:13:39.080095 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 7 01:13:39.080105 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 7 01:13:39.080115 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 7 01:13:39.080125 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 7 01:13:39.080135 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 7 01:13:39.080145 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 7 01:13:39.080154 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 7 01:13:39.080164 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 7 01:13:39.080177 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 7 01:13:39.080187 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:13:39.080197 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:13:39.080206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 7 01:13:39.080217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 7 01:13:39.080226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 7 01:13:39.080236 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 7 01:13:39.080247 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 7 01:13:39.080257 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Mar 7 01:13:39.080270 kernel: Zone ranges:
Mar 7 01:13:39.080280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:13:39.080290 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:13:39.080299 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 7 01:13:39.080309 kernel: Movable zone start for each node
Mar 7 01:13:39.080319 kernel: Early memory node ranges
Mar 7 01:13:39.080329 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 7 01:13:39.080339 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 7 01:13:39.080349 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Mar 7 01:13:39.080359 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 7 01:13:39.080371 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 7 01:13:39.080381 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 7 01:13:39.080391 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:13:39.080401 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 7 01:13:39.080411 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 7 01:13:39.080421 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 7 01:13:39.080431 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 7 01:13:39.080441 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:13:39.080450 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:13:39.080463 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:13:39.080473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:13:39.080483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:13:39.080493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:13:39.080503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:13:39.080513 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:13:39.080523 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:13:39.080532 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 7 01:13:39.080542 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:13:39.080555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:13:39.080565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:13:39.080575 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:13:39.080585 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:13:39.080600 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:13:39.080610 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:13:39.080620 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:13:39.080631 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:13:39.080645 kernel: random: crng init done
Mar 7 01:13:39.080654 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:13:39.080665 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:13:39.080675 kernel: Fallback order for Node 0: 0
Mar 7 01:13:39.080685 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Mar 7 01:13:39.080694 kernel: Policy zone: Normal
Mar 7 01:13:39.080704 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:13:39.080714 kernel: software IO TLB: area num 2.
Mar 7 01:13:39.080724 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347148K reserved, 0K cma-reserved)
Mar 7 01:13:39.080737 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:13:39.080747 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:13:39.080757 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:13:39.080767 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:13:39.080776 kernel: Dynamic Preempt: voluntary
Mar 7 01:13:39.080786 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:13:39.080797 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:13:39.080807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:13:39.080830 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:13:39.080841 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:13:39.080854 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:13:39.080864 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:13:39.080877 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:13:39.080888 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:13:39.080936 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:13:39.080953 kernel: Console: colour dummy device 80x25
Mar 7 01:13:39.080974 kernel: printk: console [ttyS0] enabled
Mar 7 01:13:39.080991 kernel: ACPI: Core revision 20230628
Mar 7 01:13:39.081010 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:13:39.081028 kernel: x2apic enabled
Mar 7 01:13:39.081048 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:13:39.081065 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 7 01:13:39.081084 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:13:39.081102 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 7 01:13:39.081122 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 7 01:13:39.081142 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 7 01:13:39.081164 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:13:39.081182 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 7 01:13:39.081200 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 7 01:13:39.081218 kernel: Spectre V2 : Mitigation: IBRS
Mar 7 01:13:39.081235 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:13:39.081257 kernel: RETBleed: Mitigation: IBRS
Mar 7 01:13:39.081278 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 7 01:13:39.081295 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 7 01:13:39.081317 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 7 01:13:39.081335 kernel: MDS: Mitigation: Clear CPU buffers
Mar 7 01:13:39.081354 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:13:39.081373 kernel: active return thunk: its_return_thunk
Mar 7 01:13:39.081392 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:13:39.081411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:13:39.081429 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:13:39.081448 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:13:39.081468 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:13:39.081492 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 7 01:13:39.081512 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:13:39.081533 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:13:39.081552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:13:39.081572 kernel: landlock: Up and running.
Mar 7 01:13:39.081589 kernel: SELinux: Initializing.
Mar 7 01:13:39.081617 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.081635 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.081655 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 7 01:13:39.081678 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:13:39.081698 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:13:39.081718 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:13:39.081738 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 7 01:13:39.081754 kernel: signal: max sigframe size: 1776
Mar 7 01:13:39.081771 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:13:39.081790 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:13:39.081808 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:13:39.081827 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:13:39.081849 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:13:39.081869 kernel: .... node #0, CPUs: #1
Mar 7 01:13:39.081910 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:13:39.081933 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:13:39.081953 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:13:39.081972 kernel: smpboot: Max logical packages: 1
Mar 7 01:13:39.081992 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 7 01:13:39.082010 kernel: devtmpfs: initialized
Mar 7 01:13:39.082033 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:13:39.082051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 7 01:13:39.082070 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:13:39.082088 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:13:39.082107 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:13:39.082127 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:13:39.082146 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:13:39.082165 kernel: audit: type=2000 audit(1772846017.850:1): state=initialized audit_enabled=0 res=1
Mar 7 01:13:39.082184 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:13:39.082208 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:13:39.082227 kernel: cpuidle: using governor menu
Mar 7 01:13:39.082246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:13:39.082265 kernel: dca service started, version 1.12.1
Mar 7 01:13:39.082284 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:13:39.082304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:13:39.082323 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:13:39.082342 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:13:39.082361 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:13:39.082385 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:13:39.082404 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:13:39.082424 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:13:39.082443 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:13:39.082462 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:13:39.082481 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:13:39.082499 kernel: ACPI: Interpreter enabled
Mar 7 01:13:39.082525 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:13:39.082544 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:13:39.082563 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:13:39.082588 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:13:39.082616 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 7 01:13:39.082636 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:13:39.082934 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:13:39.083155 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:13:39.083350 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:13:39.083376 kernel: PCI host bridge to bus 0000:00
Mar 7 01:13:39.083582 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:13:39.083774 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:13:39.083971 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:13:39.084142 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 7 01:13:39.084307 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:13:39.084512 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:13:39.084732 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 7 01:13:39.084955 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 7 01:13:39.085156 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:13:39.085355 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 7 01:13:39.085987 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 7 01:13:39.086205 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 7 01:13:39.086422 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:13:39.086644 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 7 01:13:39.086853 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 7 01:13:39.087620 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:13:39.087838 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 7 01:13:39.088133 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 7 01:13:39.088163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:13:39.088183 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:13:39.088207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:13:39.088225 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:13:39.088244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:13:39.088265 kernel: iommu: Default domain type: Translated
Mar 7 01:13:39.088287 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:13:39.088305 kernel: efivars: Registered efivars operations
Mar 7 01:13:39.088322 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:13:39.088340 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:13:39.088358 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 7 01:13:39.088382 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 7 01:13:39.088401 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 7 01:13:39.088420 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 7 01:13:39.088439 kernel: vgaarb: loaded
Mar 7 01:13:39.088459 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:13:39.088479 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:13:39.088499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:13:39.088519 kernel: pnp: PnP ACPI init
Mar 7 01:13:39.088539 kernel: pnp: PnP ACPI: found 7 devices
Mar 7 01:13:39.088563 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:13:39.088582 kernel: NET: Registered PF_INET protocol family
Mar 7 01:13:39.088612 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:13:39.088631 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:13:39.088650 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:13:39.088671 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:13:39.088691 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:13:39.088710 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:13:39.088734 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.088751 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:13:39.088769 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:13:39.088787 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:13:39.089069 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:13:39.089258 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:13:39.089440 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:13:39.089637 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 7 01:13:39.089852 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:13:39.089879 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:13:39.089913 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:13:39.089932 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 7 01:13:39.089951 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:13:39.089971 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 7 01:13:39.089991 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:13:39.090012 kernel: Initialise system trusted keyrings
Mar 7 01:13:39.090037 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:13:39.090057 kernel: Key type asymmetric registered
Mar 7 01:13:39.090076 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:13:39.090095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:13:39.090115 kernel: io scheduler mq-deadline registered
Mar 7 01:13:39.090135 kernel: io scheduler kyber registered
Mar 7 01:13:39.090155 kernel: io scheduler bfq registered
Mar 7 01:13:39.090174 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:13:39.090195 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 7 01:13:39.090421 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 7 01:13:39.090446 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 7 01:13:39.090666 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 7 01:13:39.090693 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 7 01:13:39.090883 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 7 01:13:39.090933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:13:39.090952 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:13:39.090969 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:13:39.090987 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 7 01:13:39.091011 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 7 01:13:39.091208 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 7 01:13:39.091233 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:13:39.091252 kernel: i8042: Warning: Keylock active
Mar 7 01:13:39.091271 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:13:39.091289 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:13:39.091485 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:13:39.091675 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:13:39.091858 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:13:38 UTC (1772846018)
Mar 7 01:13:39.092084 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:13:39.092111 kernel: intel_pstate: CPU model not supported
Mar 7 01:13:39.092129 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:13:39.092149 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:13:39.092167 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:13:39.092184 kernel: Segment Routing with IPv6
Mar 7 01:13:39.092203 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:13:39.092229 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:13:39.092247 kernel: Key type dns_resolver registered
Mar 7 01:13:39.092266 kernel: IPI shorthand broadcast: enabled
Mar 7 01:13:39.092282 kernel: sched_clock: Marking stable (862004783, 132318920)->(1007408309,
-13084606) Mar 7 01:13:39.092299 kernel: registered taskstats version 1 Mar 7 01:13:39.092317 kernel: Loading compiled-in X.509 certificates Mar 7 01:13:39.092337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:13:39.092356 kernel: Key type .fscrypt registered Mar 7 01:13:39.092375 kernel: Key type fscrypt-provisioning registered Mar 7 01:13:39.092398 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:13:39.092417 kernel: ima: No architecture policies found Mar 7 01:13:39.092435 kernel: clk: Disabling unused clocks Mar 7 01:13:39.092454 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:13:39.092472 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:13:39.092490 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:13:39.092508 kernel: Run /init as init process Mar 7 01:13:39.092525 kernel: with arguments: Mar 7 01:13:39.092543 kernel: /init Mar 7 01:13:39.092565 kernel: with environment: Mar 7 01:13:39.092584 kernel: HOME=/ Mar 7 01:13:39.092612 kernel: TERM=linux Mar 7 01:13:39.092631 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 01:13:39.092654 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:13:39.092677 systemd[1]: Detected virtualization google. Mar 7 01:13:39.092696 systemd[1]: Detected architecture x86-64. Mar 7 01:13:39.092719 systemd[1]: Running in initrd. Mar 7 01:13:39.092738 systemd[1]: No hostname configured, using default hostname. Mar 7 01:13:39.092757 systemd[1]: Hostname set to . 
Mar 7 01:13:39.092778 systemd[1]: Initializing machine ID from random generator. Mar 7 01:13:39.092797 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:13:39.092816 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:13:39.092836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:13:39.092856 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:13:39.092880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:13:39.092942 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:13:39.092963 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:13:39.092985 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:13:39.093004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:13:39.093024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:13:39.093043 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:13:39.093066 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:13:39.093087 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:13:39.093127 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:13:39.093151 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:13:39.093171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:13:39.093192 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:13:39.093212 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Mar 7 01:13:39.093236 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:13:39.093256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:13:39.093277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:13:39.093298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:13:39.093318 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:13:39.093339 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:13:39.093359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:13:39.093379 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:13:39.093403 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:13:39.093423 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:13:39.093443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:13:39.093464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:13:39.093520 systemd-journald[184]: Collecting audit messages is disabled. Mar 7 01:13:39.093578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:13:39.093616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:13:39.093638 systemd-journald[184]: Journal started Mar 7 01:13:39.093678 systemd-journald[184]: Runtime Journal (/run/log/journal/993e848e5a444a8eb6fa5374f418db72) is 8.0M, max 148.7M, 140.7M free. Mar 7 01:13:39.102915 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:13:39.102987 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:13:39.115258 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Mar 7 01:13:39.123427 systemd-modules-load[185]: Inserted module 'overlay' Mar 7 01:13:39.124106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:13:39.133265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:13:39.141361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:13:39.160085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:13:39.169104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:13:39.176060 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:13:39.169663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:13:39.181031 kernel: Bridge firewalling registered Mar 7 01:13:39.177265 systemd-modules-load[185]: Inserted module 'br_netfilter' Mar 7 01:13:39.182234 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:13:39.197283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:13:39.214264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:13:39.221659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:13:39.226683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:13:39.236132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:13:39.248091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 7 01:13:39.268757 dracut-cmdline[217]: dracut-dracut-053 Mar 7 01:13:39.273483 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:13:39.298770 systemd-resolved[218]: Positive Trust Anchors: Mar 7 01:13:39.298790 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:13:39.298854 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:13:39.306186 systemd-resolved[218]: Defaulting to hostname 'linux'. Mar 7 01:13:39.307940 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:13:39.314648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:13:39.382949 kernel: SCSI subsystem initialized Mar 7 01:13:39.393920 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:13:39.406934 kernel: iscsi: registered transport (tcp) Mar 7 01:13:39.431421 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:13:39.431505 kernel: QLogic iSCSI HBA Driver Mar 7 01:13:39.486225 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 7 01:13:39.503201 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:13:39.533930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:13:39.534018 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:13:39.534043 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:13:39.579939 kernel: raid6: avx2x4 gen() 18168 MB/s Mar 7 01:13:39.596922 kernel: raid6: avx2x2 gen() 17627 MB/s Mar 7 01:13:39.614401 kernel: raid6: avx2x1 gen() 13456 MB/s Mar 7 01:13:39.614488 kernel: raid6: using algorithm avx2x4 gen() 18168 MB/s Mar 7 01:13:39.632554 kernel: raid6: .... xor() 7450 MB/s, rmw enabled Mar 7 01:13:39.632621 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:13:39.655935 kernel: xor: automatically using best checksumming function avx Mar 7 01:13:39.831010 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:13:39.844405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:13:39.850261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:13:39.878015 systemd-udevd[401]: Using default interface naming scheme 'v255'. Mar 7 01:13:39.885048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:13:39.896170 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:13:39.928853 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Mar 7 01:13:39.968410 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:13:39.973183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:13:40.066810 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:13:40.081106 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 7 01:13:40.120346 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:13:40.132392 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:13:40.142010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:13:40.147214 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:13:40.167032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:13:40.207964 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:13:40.211874 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:13:40.241270 kernel: scsi host0: Virtio SCSI HBA Mar 7 01:13:40.241416 kernel: blk-mq: reduced tag depth to 10240 Mar 7 01:13:40.249622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:13:40.258955 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Mar 7 01:13:40.259314 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:13:40.259347 kernel: AES CTR mode by8 optimization enabled Mar 7 01:13:40.249944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:13:40.264764 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:13:40.285208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:13:40.285820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:13:40.293121 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:13:40.304424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:13:40.347806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:13:40.360525 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Mar 7 01:13:40.360997 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Mar 7 01:13:40.364007 kernel: sd 0:0:1:0: [sda] Write Protect is off Mar 7 01:13:40.364317 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Mar 7 01:13:40.364573 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 01:13:40.366135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:13:40.376328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:13:40.376407 kernel: GPT:17805311 != 33554431 Mar 7 01:13:40.376442 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:13:40.376466 kernel: GPT:17805311 != 33554431 Mar 7 01:13:40.376494 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:13:40.376518 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:13:40.378921 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Mar 7 01:13:40.387821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:13:40.439918 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (454) Mar 7 01:13:40.439994 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458) Mar 7 01:13:40.439847 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Mar 7 01:13:40.468528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Mar 7 01:13:40.476415 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Mar 7 01:13:40.483328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. 
Mar 7 01:13:40.483597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Mar 7 01:13:40.496292 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:13:40.514533 disk-uuid[552]: Primary Header is updated. Mar 7 01:13:40.514533 disk-uuid[552]: Secondary Entries is updated. Mar 7 01:13:40.514533 disk-uuid[552]: Secondary Header is updated. Mar 7 01:13:40.526914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:13:40.536012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:13:40.549919 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:13:41.560013 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:13:41.560692 disk-uuid[553]: The operation has completed successfully. Mar 7 01:13:41.630479 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:13:41.630630 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:13:41.661123 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:13:41.691312 sh[570]: Success Mar 7 01:13:41.715952 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 7 01:13:41.800480 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:13:41.807945 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:13:41.836471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 01:13:41.884297 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:13:41.884404 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:13:41.884431 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:13:41.900574 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:13:41.900636 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:13:41.934932 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 01:13:41.941913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:13:41.942915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:13:41.947131 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:13:42.026087 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:13:42.026135 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:13:42.026163 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:13:42.026187 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:13:42.026212 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:13:42.004632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:13:42.050968 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:13:42.065672 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:13:42.093159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:13:42.133045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 7 01:13:42.140248 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:13:42.220317 systemd-networkd[752]: lo: Link UP Mar 7 01:13:42.220331 systemd-networkd[752]: lo: Gained carrier Mar 7 01:13:42.222999 systemd-networkd[752]: Enumeration completed Mar 7 01:13:42.223576 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:13:42.223584 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:13:42.224387 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:13:42.226408 systemd-networkd[752]: eth0: Link UP Mar 7 01:13:42.226415 systemd-networkd[752]: eth0: Gained carrier Mar 7 01:13:42.296117 ignition[709]: Ignition 2.19.0 Mar 7 01:13:42.226428 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:13:42.296128 ignition[709]: Stage: fetch-offline Mar 7 01:13:42.261011 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:13:42.296174 ignition[709]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:13:42.261035 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 7 01:13:42.296186 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:13:42.299499 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:13:42.296310 ignition[709]: parsed url from cmdline: "" Mar 7 01:13:42.307488 systemd[1]: Reached target network.target - Network. 
Mar 7 01:13:42.296317 ignition[709]: no config URL provided Mar 7 01:13:42.338099 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 7 01:13:42.296326 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:13:42.393597 unknown[762]: fetched base config from "system" Mar 7 01:13:42.296340 ignition[709]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:13:42.393607 unknown[762]: fetched base config from "system" Mar 7 01:13:42.296349 ignition[709]: failed to fetch config: resource requires networking Mar 7 01:13:42.393614 unknown[762]: fetched user config from "gcp" Mar 7 01:13:42.296641 ignition[709]: Ignition finished successfully Mar 7 01:13:42.397288 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:13:42.383876 ignition[762]: Ignition 2.19.0 Mar 7 01:13:42.417125 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:13:42.383887 ignition[762]: Stage: fetch Mar 7 01:13:42.464954 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:13:42.384148 ignition[762]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:13:42.489126 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:13:42.384167 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:13:42.517632 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:13:42.384342 ignition[762]: parsed url from cmdline: "" Mar 7 01:13:42.542054 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:13:42.384350 ignition[762]: no config URL provided Mar 7 01:13:42.551311 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:13:42.384360 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:13:42.566345 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 7 01:13:42.384371 ignition[762]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:13:42.583320 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:13:42.384401 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Mar 7 01:13:42.600301 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:13:42.387981 ignition[762]: GET result: OK Mar 7 01:13:42.621264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:13:42.388055 ignition[762]: parsing config with SHA512: ad635e12abec85915ffff05f03636a97a12d3584b108630711ac5dde80d3b5a6e7d2ad569bfd965589520126a68d29214085c949268e33db4c332fa7c7bf3d71 Mar 7 01:13:42.395229 ignition[762]: fetch: fetch complete Mar 7 01:13:42.395240 ignition[762]: fetch: fetch passed Mar 7 01:13:42.395322 ignition[762]: Ignition finished successfully Mar 7 01:13:42.461653 ignition[767]: Ignition 2.19.0 Mar 7 01:13:42.461663 ignition[767]: Stage: kargs Mar 7 01:13:42.461869 ignition[767]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:13:42.461882 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:13:42.463751 ignition[767]: kargs: kargs passed Mar 7 01:13:42.463837 ignition[767]: Ignition finished successfully Mar 7 01:13:42.510040 ignition[774]: Ignition 2.19.0 Mar 7 01:13:42.510053 ignition[774]: Stage: disks Mar 7 01:13:42.510382 ignition[774]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:13:42.510401 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:13:42.512438 ignition[774]: disks: disks passed Mar 7 01:13:42.512524 ignition[774]: Ignition finished successfully Mar 7 01:13:42.685829 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 7 01:13:42.855007 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Mar 7 01:13:42.860057 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:13:43.013501 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:13:43.013392 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:13:43.022881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:13:43.052052 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:13:43.099824 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (790) Mar 7 01:13:43.099868 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:13:43.099929 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:13:43.099955 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:13:43.093421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:13:43.126931 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 01:13:43.127029 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:13:43.127477 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:13:43.127576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:13:43.127624 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:13:43.153403 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:13:43.170469 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:13:43.201152 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:13:43.342563 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:13:43.353083 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:13:43.363022 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:13:43.373067 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:13:43.518856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:13:43.526233 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:13:43.560948 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:13:43.567164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:13:43.577179 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:13:43.606705 ignition[902]: INFO : Ignition 2.19.0 Mar 7 01:13:43.606705 ignition[902]: INFO : Stage: mount Mar 7 01:13:43.621055 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:13:43.621055 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 7 01:13:43.621055 ignition[902]: INFO : mount: mount passed Mar 7 01:13:43.621055 ignition[902]: INFO : Ignition finished successfully Mar 7 01:13:43.610100 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:13:43.643431 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:13:43.671029 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:13:43.705139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 7 01:13:43.736927 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (914)
Mar 7 01:13:43.755118 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:13:43.755211 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:13:43.755239 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:13:43.777145 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 01:13:43.777251 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:13:43.780235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:13:43.815217 ignition[931]: INFO : Ignition 2.19.0
Mar 7 01:13:43.815217 ignition[931]: INFO : Stage: files
Mar 7 01:13:43.830023 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:43.830023 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:43.830023 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:13:43.830023 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:13:43.830023 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:13:43.823803 unknown[931]: wrote ssh authorized keys file for user: core
Mar 7 01:13:43.997038 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:13:44.159293 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:44.176089 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:13:44.181168 systemd-networkd[752]: eth0: Gained IPv6LL
Mar 7 01:13:44.590648 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:13:45.246244 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:13:45.246244 ignition[931]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:13:45.283226 ignition[931]: INFO : files: files passed
Mar 7 01:13:45.283226 ignition[931]: INFO : Ignition finished successfully
Mar 7 01:13:45.252926 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:13:45.271146 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:13:45.300002 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:13:45.391687 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:13:45.547132 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:13:45.547132 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:13:45.391811 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:13:45.614083 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:13:45.403571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:13:45.424535 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:13:45.452151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:13:45.543835 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:13:45.544140 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:13:45.558521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:13:45.572334 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:13:45.604286 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:13:45.611216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:13:45.661086 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:13:45.686148 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:13:45.729554 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:13:45.742267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:13:45.766339 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:13:45.785298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:13:45.785531 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:13:45.819358 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:13:45.839300 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:13:45.857339 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:13:45.875358 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:13:45.894334 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:13:45.917288 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:13:45.937246 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:13:45.956348 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:13:45.976333 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:13:45.996287 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:13:46.014185 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:13:46.014359 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:13:46.045385 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:13:46.065260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:13:46.086212 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:13:46.086459 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:13:46.104208 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:13:46.104380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:13:46.232086 ignition[983]: INFO : Ignition 2.19.0
Mar 7 01:13:46.232086 ignition[983]: INFO : Stage: umount
Mar 7 01:13:46.232086 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:13:46.232086 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 7 01:13:46.232086 ignition[983]: INFO : umount: umount passed
Mar 7 01:13:46.232086 ignition[983]: INFO : Ignition finished successfully
Mar 7 01:13:46.132362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:13:46.132594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:13:46.154340 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:13:46.154498 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:13:46.182163 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:13:46.211304 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:13:46.222096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:13:46.222374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:13:46.244355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:13:46.244575 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:13:46.260559 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:13:46.261666 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:13:46.261782 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:13:46.274574 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:13:46.274693 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:13:46.292590 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:13:46.292724 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:13:46.312266 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:13:46.312362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:13:46.333162 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:13:46.333251 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:13:46.350137 systemd[1]: Stopped target network.target - Network.
Mar 7 01:13:46.365064 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:13:46.365197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:13:46.386139 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:13:46.404083 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:13:46.408048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:13:46.423067 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:13:46.438076 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:13:46.455128 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:13:46.455233 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:13:46.473143 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:13:46.473241 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:13:46.493146 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:13:46.493262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:13:46.513163 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:13:46.513259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:13:46.533172 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:13:46.533275 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:13:46.553504 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:13:46.559983 systemd-networkd[752]: eth0: DHCPv6 lease lost
Mar 7 01:13:46.580282 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:13:46.598749 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:13:46.598918 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:13:46.608655 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:13:46.608804 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:13:46.634801 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:13:46.634943 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:13:46.647993 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:13:46.648064 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:13:46.686079 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:13:46.696251 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:13:46.696344 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:13:47.206103 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:13:46.722303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:13:46.722396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:13:46.740303 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:13:46.740389 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:13:46.760251 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:13:46.760333 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:13:46.770494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:13:46.788803 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:13:46.789017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:13:46.824392 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:13:46.824462 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:13:46.831285 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:13:46.831335 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:13:46.848273 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:13:46.848367 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:13:46.889052 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:13:46.889183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:13:46.907275 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:13:46.907364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:13:46.941320 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:13:46.954265 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:13:46.954353 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:13:46.990302 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:13:46.990390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:13:47.000415 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:13:47.000486 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:13:47.032233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:13:47.032315 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:13:47.039808 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:13:47.039967 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:13:47.074592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:13:47.074713 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:13:47.085734 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:13:47.126159 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:13:47.159709 systemd[1]: Switching root.
Mar 7 01:13:47.555073 systemd-journald[184]: Journal stopped
Mar 7 01:13:49.917234 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:13:49.917288 kernel: SELinux: policy capability open_perms=1
Mar 7 01:13:49.917311 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:13:49.917329 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:13:49.917346 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:13:49.917364 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:13:49.917385 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:13:49.917408 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:13:49.917427 kernel: audit: type=1403 audit(1772846027.911:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:13:49.917449 systemd[1]: Successfully loaded SELinux policy in 94.356ms.
Mar 7 01:13:49.917472 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.672ms.
Mar 7 01:13:49.917497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:13:49.917518 systemd[1]: Detected virtualization google.
Mar 7 01:13:49.917538 systemd[1]: Detected architecture x86-64.
Mar 7 01:13:49.917564 systemd[1]: Detected first boot.
Mar 7 01:13:49.917586 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:13:49.917608 zram_generator::config[1041]: No configuration found.
Mar 7 01:13:49.917631 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:13:49.917652 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:13:49.917677 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:13:49.917699 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:13:49.917721 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:13:49.917742 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:13:49.917763 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:13:49.917785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:13:49.917807 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:13:49.917833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:13:49.917855 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:13:49.917876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:13:49.917913 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:13:49.917935 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:13:49.917957 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:13:49.917982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:13:49.918003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:13:49.918030 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:13:49.918051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:13:49.918073 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:13:49.918095 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:13:49.918117 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:13:49.918139 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:13:49.918166 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:13:49.918188 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:13:49.918217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:13:49.918244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:13:49.918268 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:13:49.918290 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:13:49.918312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:13:49.918335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:13:49.918357 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:13:49.918379 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:13:49.918407 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:13:49.918430 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:13:49.918454 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:13:49.918478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:13:49.918504 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:13:49.918527 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:13:49.918550 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:13:49.918574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:13:49.918596 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:13:49.918619 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:13:49.918642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:13:49.918664 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:13:49.918688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:13:49.918714 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:13:49.918736 kernel: ACPI: bus type drm_connector registered
Mar 7 01:13:49.918757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:13:49.918780 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:13:49.918803 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 01:13:49.918827 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 01:13:49.918849 kernel: fuse: init (API version 7.39)
Mar 7 01:13:49.918870 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:13:49.918917 kernel: loop: module loaded
Mar 7 01:13:49.918939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:13:49.918962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:13:49.919018 systemd-journald[1146]: Collecting audit messages is disabled.
Mar 7 01:13:49.919068 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:13:49.919091 systemd-journald[1146]: Journal started
Mar 7 01:13:49.919135 systemd-journald[1146]: Runtime Journal (/run/log/journal/afe798929690423dbbe7ec8c32049403) is 8.0M, max 148.7M, 140.7M free.
Mar 7 01:13:49.961957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:13:49.987929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:13:49.999963 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:13:50.011606 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:13:50.021263 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:13:50.031262 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:13:50.041278 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:13:50.051249 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:13:50.061238 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:13:50.071608 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:13:50.083593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:13:50.095476 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:13:50.095755 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:13:50.107485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:13:50.107751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:13:50.119455 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:13:50.119720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:13:50.130478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:13:50.130765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:13:50.142425 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:13:50.142688 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:13:50.153397 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:13:50.153656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:13:50.163476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:13:50.173479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:13:50.185454 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:13:50.197490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:13:50.221869 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:13:50.242047 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:13:50.257043 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:13:50.267111 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:13:50.280167 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:13:50.301244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:13:50.313145 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:13:50.321208 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:13:50.330633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:13:50.336148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:13:50.344030 systemd-journald[1146]: Time spent on flushing to /var/log/journal/afe798929690423dbbe7ec8c32049403 is 93.915ms for 919 entries.
Mar 7 01:13:50.344030 systemd-journald[1146]: System Journal (/var/log/journal/afe798929690423dbbe7ec8c32049403) is 8.0M, max 584.8M, 576.8M free.
Mar 7 01:13:50.465294 systemd-journald[1146]: Received client request to flush runtime journal.
Mar 7 01:13:50.363118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:13:50.384114 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:13:50.405696 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:13:50.417281 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:13:50.429656 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:13:50.441709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:13:50.457693 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:13:50.469983 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:13:50.476107 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 7 01:13:50.476142 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 7 01:13:50.489345 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 01:13:50.493356 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:13:50.514170 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:13:50.579235 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:13:50.598153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:13:50.642535 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Mar 7 01:13:50.643105 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Mar 7 01:13:50.651542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:13:51.128187 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:13:51.146137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:13:51.197557 systemd-udevd[1209]: Using default interface naming scheme 'v255'.
Mar 7 01:13:51.237839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:13:51.264107 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:13:51.302187 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:13:51.366978 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 7 01:13:51.408558 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:13:51.469926 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 7 01:13:51.478991 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:13:51.486935 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Mar 7 01:13:51.492962 kernel: ACPI: button: Sleep Button [SLPF] Mar 7 01:13:51.576049 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1230) Mar 7 01:13:51.636225 systemd-networkd[1221]: lo: Link UP Mar 7 01:13:51.637068 systemd-networkd[1221]: lo: Gained carrier Mar 7 01:13:51.640418 systemd-networkd[1221]: Enumeration completed Mar 7 01:13:51.642105 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:13:51.642613 systemd-networkd[1221]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:13:51.642731 systemd-networkd[1221]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:13:51.644033 systemd-networkd[1221]: eth0: Link UP Mar 7 01:13:51.644141 systemd-networkd[1221]: eth0: Gained carrier Mar 7 01:13:51.644232 systemd-networkd[1221]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 7 01:13:51.653978 systemd-networkd[1221]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:13:51.654128 systemd-networkd[1221]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 7 01:13:51.669012 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 7 01:13:51.707235 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:13:51.721055 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 7 01:13:51.737941 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:13:51.751139 systemd-networkd[1221]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:13:51.763918 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:13:51.776265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:13:51.793554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Mar 7 01:13:51.817236 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:13:51.825254 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:13:51.844951 lvm[1253]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:13:51.878697 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:13:51.879833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:13:51.892259 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:13:51.905943 lvm[1257]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 7 01:13:51.912719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:13:51.950757 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:13:51.963075 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:13:51.974074 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:13:51.974136 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:13:51.984084 systemd[1]: Reached target machines.target - Containers. Mar 7 01:13:51.994568 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:13:52.018192 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:13:52.034143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:13:52.045261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:13:52.053331 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:13:52.070633 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:13:52.090672 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:13:52.103657 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:13:52.118976 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:13:52.145460 kernel: loop0: detected capacity change from 0 to 142488 Mar 7 01:13:52.149816 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Mar 7 01:13:52.151725 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:13:52.221103 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:13:52.253930 kernel: loop1: detected capacity change from 0 to 140768 Mar 7 01:13:52.322941 kernel: loop2: detected capacity change from 0 to 228704 Mar 7 01:13:52.403016 kernel: loop3: detected capacity change from 0 to 54824 Mar 7 01:13:52.472393 kernel: loop4: detected capacity change from 0 to 142488 Mar 7 01:13:52.514958 kernel: loop5: detected capacity change from 0 to 140768 Mar 7 01:13:52.560814 kernel: loop6: detected capacity change from 0 to 228704 Mar 7 01:13:52.602281 kernel: loop7: detected capacity change from 0 to 54824 Mar 7 01:13:52.623965 (sd-merge)[1281]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Mar 7 01:13:52.624883 (sd-merge)[1281]: Merged extensions into '/usr'. Mar 7 01:13:52.631617 systemd[1]: Reloading requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:13:52.631641 systemd[1]: Reloading... Mar 7 01:13:52.763110 zram_generator::config[1309]: No configuration found. Mar 7 01:13:52.966126 ldconfig[1265]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:13:52.989759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:13:53.013463 systemd-networkd[1221]: eth0: Gained IPv6LL Mar 7 01:13:53.085582 systemd[1]: Reloading finished in 451 ms. Mar 7 01:13:53.106596 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:13:53.118533 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 7 01:13:53.128503 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:13:53.156100 systemd[1]: Starting ensure-sysext.service... Mar 7 01:13:53.164529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:13:53.184052 systemd[1]: Reloading requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:13:53.184440 systemd[1]: Reloading... Mar 7 01:13:53.216534 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:13:53.217274 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:13:53.219137 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:13:53.219738 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Mar 7 01:13:53.219874 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Mar 7 01:13:53.225172 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:13:53.225189 systemd-tmpfiles[1359]: Skipping /boot Mar 7 01:13:53.248309 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:13:53.250008 systemd-tmpfiles[1359]: Skipping /boot Mar 7 01:13:53.319922 zram_generator::config[1386]: No configuration found. Mar 7 01:13:53.466957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:13:53.557441 systemd[1]: Reloading finished in 372 ms. Mar 7 01:13:53.583815 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:13:53.605229 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Mar 7 01:13:53.622566 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:13:53.645294 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:13:53.666586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:13:53.685139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:13:53.698835 augenrules[1454]: No rules Mar 7 01:13:53.703634 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:13:53.727263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:13:53.727762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:13:53.738294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:13:53.757905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:13:53.779107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:13:53.789165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:13:53.789529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:13:53.799886 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:13:53.812435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:13:53.812742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:13:53.822964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:13:53.823257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 7 01:13:53.836940 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:13:53.837515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:13:53.852530 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:13:53.865081 systemd-resolved[1450]: Positive Trust Anchors: Mar 7 01:13:53.865105 systemd-resolved[1450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:13:53.865170 systemd-resolved[1450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:13:53.865642 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:13:53.872101 systemd-resolved[1450]: Defaulting to hostname 'linux'. Mar 7 01:13:53.877587 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:13:53.895622 systemd[1]: Reached target network.target - Network. Mar 7 01:13:53.904214 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:13:53.914208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:13:53.925212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:13:53.925563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 7 01:13:53.931263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:13:53.954351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:13:53.980588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:13:53.991187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:13:54.005330 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:13:54.015056 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:13:54.015275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:13:54.022766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:13:54.023087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:13:54.035088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:13:54.035381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:13:54.047768 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:13:54.048135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:13:54.058747 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:13:54.077194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:13:54.077640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:13:54.083240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 7 01:13:54.102858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:13:54.128289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:13:54.149381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:13:54.171304 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 7 01:13:54.180184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:13:54.180934 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:13:54.191165 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:13:54.191349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:13:54.195174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:13:54.195482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:13:54.207793 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:13:54.208091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:13:54.218780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:13:54.219089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:13:54.230672 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:13:54.231002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:13:54.250488 systemd[1]: Finished ensure-sysext.service. Mar 7 01:13:54.260184 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 7 01:13:54.282070 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... 
Mar 7 01:13:54.293060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:13:54.293123 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:13:54.303235 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:13:54.314137 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:13:54.325256 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:13:54.335183 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:13:54.346058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:13:54.357044 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:13:54.357115 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:13:54.365053 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:13:54.373688 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:13:54.384866 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:13:54.393335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:13:54.394433 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Mar 7 01:13:54.406387 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:13:54.418407 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:13:54.428049 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:13:54.438086 systemd[1]: Reached target basic.target - Basic System. 
Mar 7 01:13:54.447315 systemd[1]: System is tainted: cgroupsv1 Mar 7 01:13:54.447406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:13:54.447444 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:13:54.453057 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:13:54.477096 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 01:13:54.498393 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:13:54.523063 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:13:54.544263 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:13:54.555141 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:13:54.559412 jq[1531]: false Mar 7 01:13:54.567030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:13:54.587127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:13:54.604771 systemd[1]: Started ntpd.service - Network Time Service. Mar 7 01:13:54.615931 coreos-metadata[1528]: Mar 07 01:13:54.615 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Mar 7 01:13:54.621163 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 7 01:13:54.626917 coreos-metadata[1528]: Mar 07 01:13:54.625 INFO Fetch successful Mar 7 01:13:54.626917 coreos-metadata[1528]: Mar 07 01:13:54.626 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Mar 7 01:13:54.629228 coreos-metadata[1528]: Mar 07 01:13:54.629 INFO Fetch successful Mar 7 01:13:54.629228 coreos-metadata[1528]: Mar 07 01:13:54.629 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Mar 7 01:13:54.632029 coreos-metadata[1528]: Mar 07 01:13:54.631 INFO Fetch successful Mar 7 01:13:54.632029 coreos-metadata[1528]: Mar 07 01:13:54.631 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Mar 7 01:13:54.638153 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Mar 7 01:13:54.645000 extend-filesystems[1532]: Found loop4 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found loop5 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found loop6 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found loop7 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda1 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda2 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda3 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found usr Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda4 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda6 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda7 Mar 7 01:13:54.645000 extend-filesystems[1532]: Found sda9 Mar 7 01:13:54.645000 extend-filesystems[1532]: Checking size of /dev/sda9 Mar 7 01:13:54.773534 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Command line: 
/usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: ---------------------------------------------------- Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: ntp-4 is maintained by Network Time Foundation, Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: corporation. Support and training for ntp-4 are Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: available at https://www.nwtime.org/support Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: ---------------------------------------------------- Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: proto: precision = 0.086 usec (-23) Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: basedate set to 2026-02-22 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: gps base set to 2026-02-22 (week 2407) Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listen normally on 3 eth0 10.128.0.18:123 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listen normally on 4 lo [::1]:123 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:12%2]:123 Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: Listening on routing socket on fd #22 for interface updates Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:13:54.773648 ntpd[1540]: 7 Mar 01:13:54 ntpd[1540]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:13:54.663106 systemd[1]: Starting 
prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:13:54.775626 coreos-metadata[1528]: Mar 07 01:13:54.649 INFO Fetch successful Mar 7 01:13:54.698190 dbus-daemon[1530]: [system] SELinux support is enabled Mar 7 01:13:54.784264 extend-filesystems[1532]: Resized partition /dev/sda9 Mar 7 01:13:54.685141 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:13:54.700203 dbus-daemon[1530]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1221 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 01:13:54.803534 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:13:54.819004 init.sh[1542]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 7 01:13:54.819004 init.sh[1542]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 7 01:13:54.819004 init.sh[1542]: + /usr/bin/google_instance_setup Mar 7 01:13:54.709242 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:13:54.744700 ntpd[1540]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting Mar 7 01:13:54.780606 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:13:54.744779 ntpd[1540]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 01:13:54.792819 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Mar 7 01:13:54.744795 ntpd[1540]: ---------------------------------------------------- Mar 7 01:13:54.802492 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:13:54.744809 ntpd[1540]: ntp-4 is maintained by Network Time Foundation, Mar 7 01:13:54.744823 ntpd[1540]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 01:13:54.744878 ntpd[1540]: corporation. 
Support and training for ntp-4 are Mar 7 01:13:54.744920 ntpd[1540]: available at https://www.nwtime.org/support Mar 7 01:13:54.744963 ntpd[1540]: ---------------------------------------------------- Mar 7 01:13:54.751970 ntpd[1540]: proto: precision = 0.086 usec (-23) Mar 7 01:13:54.753084 ntpd[1540]: basedate set to 2026-02-22 Mar 7 01:13:54.753111 ntpd[1540]: gps base set to 2026-02-22 (week 2407) Mar 7 01:13:54.762648 ntpd[1540]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 01:13:54.762720 ntpd[1540]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 01:13:54.764101 ntpd[1540]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 01:13:54.764170 ntpd[1540]: Listen normally on 3 eth0 10.128.0.18:123 Mar 7 01:13:54.764509 ntpd[1540]: Listen normally on 4 lo [::1]:123 Mar 7 01:13:54.764580 ntpd[1540]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:12%2]:123 Mar 7 01:13:54.764630 ntpd[1540]: Listening on routing socket on fd #22 for interface updates Mar 7 01:13:54.830067 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:13:54.772038 ntpd[1540]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:13:54.772085 ntpd[1540]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:13:54.848013 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:13:54.856928 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Mar 7 01:13:54.886475 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:13:54.891946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:13:54.900628 extend-filesystems[1561]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 01:13:54.900628 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 7 01:13:54.900628 extend-filesystems[1561]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. 
Mar 7 01:13:54.933088 extend-filesystems[1532]: Resized filesystem in /dev/sda9 Mar 7 01:13:54.908497 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:13:54.942231 jq[1576]: true Mar 7 01:13:54.916997 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:13:54.947473 update_engine[1572]: I20260307 01:13:54.945380 1572 main.cc:92] Flatcar Update Engine starting Mar 7 01:13:54.953401 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:13:54.953839 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:13:54.966887 update_engine[1572]: I20260307 01:13:54.959875 1572 update_check_scheduler.cc:74] Next update check in 6m41s Mar 7 01:13:54.962677 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:13:54.981724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:13:54.982211 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:13:55.036951 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1589) Mar 7 01:13:55.069186 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:13:55.085645 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:13:55.115751 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:13:55.129124 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:13:55.151376 jq[1591]: true Mar 7 01:13:55.142235 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 7 01:13:55.144075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:13:55.144129 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:13:55.167451 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 7 01:13:55.177235 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:13:55.184103 tar[1587]: linux-amd64/LICENSE Mar 7 01:13:55.184103 tar[1587]: linux-amd64/helm Mar 7 01:13:55.177281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:13:55.183309 systemd-logind[1570]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:13:55.183343 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 7 01:13:55.183377 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:13:55.186352 systemd-logind[1570]: New seat seat0. Mar 7 01:13:55.192007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:13:55.202113 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:13:55.213460 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:13:55.480092 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:13:55.485194 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:13:55.509335 systemd[1]: Starting sshkeys.service... 
Mar 7 01:13:55.584206 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 01:13:55.612444 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 7 01:13:55.806591 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 01:13:55.806814 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 01:13:55.827266 dbus-daemon[1530]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1613 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 01:13:55.848151 systemd[1]: Starting polkit.service - Authorization Manager... Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.869 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.870 INFO Fetch failed with 404: resource not found Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.870 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.870 INFO Fetch successful Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.870 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.870 INFO Fetch failed with 404: resource not found Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.870 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.871 INFO Fetch failed with 404: resource not found Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.871 INFO Fetching 
http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Mar 7 01:13:55.871730 coreos-metadata[1638]: Mar 07 01:13:55.871 INFO Fetch successful Mar 7 01:13:55.875785 unknown[1638]: wrote ssh authorized keys file for user: core Mar 7 01:13:55.899501 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:13:55.948918 update-ssh-keys[1652]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:13:55.951401 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:13:55.977105 systemd[1]: Finished sshkeys.service. Mar 7 01:13:56.017685 polkitd[1648]: Started polkitd version 121 Mar 7 01:13:56.045677 polkitd[1648]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:13:56.045787 polkitd[1648]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:13:56.064026 polkitd[1648]: Finished loading, compiling and executing 2 rules Mar 7 01:13:56.066284 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:13:56.066580 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 01:13:56.067684 polkitd[1648]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:13:56.127958 systemd-hostnamed[1613]: Hostname set to (transient) Mar 7 01:13:56.131615 systemd-resolved[1450]: System hostname changed to 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4'. Mar 7 01:13:56.342455 containerd[1594]: time="2026-03-07T01:13:56.340494718Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:13:56.470952 containerd[1594]: time="2026-03-07T01:13:56.470867290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:13:56.484053 containerd[1594]: time="2026-03-07T01:13:56.483722205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:13:56.484053 containerd[1594]: time="2026-03-07T01:13:56.483780409Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:13:56.484053 containerd[1594]: time="2026-03-07T01:13:56.483932667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:13:56.484299 containerd[1594]: time="2026-03-07T01:13:56.484160551Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:13:56.484299 containerd[1594]: time="2026-03-07T01:13:56.484189697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:13:56.484394 containerd[1594]: time="2026-03-07T01:13:56.484297704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:13:56.484394 containerd[1594]: time="2026-03-07T01:13:56.484321313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:13:56.486510 containerd[1594]: time="2026-03-07T01:13:56.486053500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:13:56.486510 containerd[1594]: time="2026-03-07T01:13:56.486126616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 7 01:13:56.486510 containerd[1594]: time="2026-03-07T01:13:56.486154746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:13:56.486510 containerd[1594]: time="2026-03-07T01:13:56.486174904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:13:56.486510 containerd[1594]: time="2026-03-07T01:13:56.486314558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:13:56.487786 containerd[1594]: time="2026-03-07T01:13:56.487260847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:13:56.488952 containerd[1594]: time="2026-03-07T01:13:56.488645377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:13:56.488952 containerd[1594]: time="2026-03-07T01:13:56.488682007Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:13:56.488952 containerd[1594]: time="2026-03-07T01:13:56.488820033Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:13:56.491841 containerd[1594]: time="2026-03-07T01:13:56.488889079Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:13:56.501714 containerd[1594]: time="2026-03-07T01:13:56.500511312Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:13:56.501856 containerd[1594]: time="2026-03-07T01:13:56.501798518Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 7 01:13:56.501933 containerd[1594]: time="2026-03-07T01:13:56.501906493Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:13:56.501983 containerd[1594]: time="2026-03-07T01:13:56.501938359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:13:56.501983 containerd[1594]: time="2026-03-07T01:13:56.501963373Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:13:56.502599 containerd[1594]: time="2026-03-07T01:13:56.502183723Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:13:56.505618 containerd[1594]: time="2026-03-07T01:13:56.505442807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.507980568Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508049228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508074810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508118609Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508142121Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508181193Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508209170Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508234253Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508274382Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508309100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508346954Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508380657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.508435 containerd[1594]: time="2026-03-07T01:13:56.508403391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.508459892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.508483018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.508535857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.508561650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.508603246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.508628176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509650 containerd[1594]: time="2026-03-07T01:13:56.509590742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509980 containerd[1594]: time="2026-03-07T01:13:56.509693765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509980 containerd[1594]: time="2026-03-07T01:13:56.509724469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509980 containerd[1594]: time="2026-03-07T01:13:56.509770482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.509980 containerd[1594]: time="2026-03-07T01:13:56.509794591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.510799 containerd[1594]: time="2026-03-07T01:13:56.510248033Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:13:56.510799 containerd[1594]: time="2026-03-07T01:13:56.510344945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 7 01:13:56.510799 containerd[1594]: time="2026-03-07T01:13:56.510409932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.510799 containerd[1594]: time="2026-03-07T01:13:56.510436147Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:13:56.512996 containerd[1594]: time="2026-03-07T01:13:56.512957597Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:13:56.513179 containerd[1594]: time="2026-03-07T01:13:56.513147588Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:13:56.513244 containerd[1594]: time="2026-03-07T01:13:56.513180901Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:13:56.513244 containerd[1594]: time="2026-03-07T01:13:56.513226919Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:13:56.513345 containerd[1594]: time="2026-03-07T01:13:56.513245898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:13:56.513345 containerd[1594]: time="2026-03-07T01:13:56.513288223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:13:56.513345 containerd[1594]: time="2026-03-07T01:13:56.513308496Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:13:56.513345 containerd[1594]: time="2026-03-07T01:13:56.513328060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:13:56.517976 containerd[1594]: time="2026-03-07T01:13:56.515527223Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:13:56.518244 containerd[1594]: time="2026-03-07T01:13:56.518002804Z" level=info msg="Connect containerd service" Mar 7 01:13:56.518244 containerd[1594]: time="2026-03-07T01:13:56.518101334Z" level=info msg="using legacy CRI server" Mar 7 01:13:56.518244 containerd[1594]: time="2026-03-07T01:13:56.518118395Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:13:56.518916 containerd[1594]: time="2026-03-07T01:13:56.518440762Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:13:56.524027 containerd[1594]: time="2026-03-07T01:13:56.523977123Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:13:56.532216 containerd[1594]: time="2026-03-07T01:13:56.532109284Z" level=info msg="Start subscribing containerd event" Mar 7 01:13:56.532216 containerd[1594]: time="2026-03-07T01:13:56.532205205Z" level=info msg="Start recovering state" Mar 7 01:13:56.532348 containerd[1594]: time="2026-03-07T01:13:56.532331826Z" level=info msg="Start event monitor" Mar 7 01:13:56.532397 containerd[1594]: time="2026-03-07T01:13:56.532367128Z" level=info msg="Start snapshots 
syncer" Mar 7 01:13:56.532744 containerd[1594]: time="2026-03-07T01:13:56.532386871Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:13:56.532744 containerd[1594]: time="2026-03-07T01:13:56.532409473Z" level=info msg="Start streaming server" Mar 7 01:13:56.539526 containerd[1594]: time="2026-03-07T01:13:56.539117420Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:13:56.539526 containerd[1594]: time="2026-03-07T01:13:56.539266309Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:13:56.539596 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:13:56.541051 containerd[1594]: time="2026-03-07T01:13:56.539760588Z" level=info msg="containerd successfully booted in 0.203135s" Mar 7 01:13:56.659294 instance-setup[1555]: INFO Running google_set_multiqueue. Mar 7 01:13:56.705944 instance-setup[1555]: INFO Set channels for eth0 to 2. Mar 7 01:13:56.720136 instance-setup[1555]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 7 01:13:56.726706 instance-setup[1555]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 7 01:13:56.726813 instance-setup[1555]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 7 01:13:56.731090 instance-setup[1555]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 7 01:13:56.734428 instance-setup[1555]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 7 01:13:56.737057 instance-setup[1555]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 7 01:13:56.737443 instance-setup[1555]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Mar 7 01:13:56.740713 instance-setup[1555]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 7 01:13:56.749751 instance-setup[1555]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 7 01:13:56.755186 instance-setup[1555]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 7 01:13:56.755955 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:13:56.759284 instance-setup[1555]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 7 01:13:56.759341 instance-setup[1555]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 7 01:13:56.823579 init.sh[1542]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 7 01:13:56.853632 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:13:56.876531 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:13:56.927549 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:13:56.928024 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:13:56.947407 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:13:56.989138 tar[1587]: linux-amd64/README.md Mar 7 01:13:57.001105 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:13:57.022835 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:13:57.047624 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:13:57.058826 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:13:57.068258 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:13:57.100240 startup-script[1707]: INFO Starting startup scripts. Mar 7 01:13:57.106187 startup-script[1707]: INFO No startup scripts found in metadata. Mar 7 01:13:57.106433 startup-script[1707]: INFO Finished running startup scripts. 
Mar 7 01:13:57.129463 init.sh[1542]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 7 01:13:57.129463 init.sh[1542]: + daemon_pids=() Mar 7 01:13:57.129463 init.sh[1542]: + for d in accounts clock_skew network Mar 7 01:13:57.129463 init.sh[1542]: + daemon_pids+=($!) Mar 7 01:13:57.129463 init.sh[1542]: + for d in accounts clock_skew network Mar 7 01:13:57.129463 init.sh[1542]: + daemon_pids+=($!) Mar 7 01:13:57.129463 init.sh[1542]: + for d in accounts clock_skew network Mar 7 01:13:57.129463 init.sh[1542]: + daemon_pids+=($!) Mar 7 01:13:57.129463 init.sh[1542]: + NOTIFY_SOCKET=/run/systemd/notify Mar 7 01:13:57.129463 init.sh[1542]: + /usr/bin/systemd-notify --ready Mar 7 01:13:57.130253 init.sh[1728]: + /usr/bin/google_accounts_daemon Mar 7 01:13:57.130714 init.sh[1729]: + /usr/bin/google_clock_skew_daemon Mar 7 01:13:57.131283 init.sh[1730]: + /usr/bin/google_network_daemon Mar 7 01:13:57.141407 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 7 01:13:57.156244 init.sh[1542]: + wait -n 1728 1729 1730 Mar 7 01:13:57.461492 google-clock-skew[1729]: INFO Starting Google Clock Skew daemon. Mar 7 01:13:57.474106 google-clock-skew[1729]: INFO Clock drift token has changed: 0. Mar 7 01:13:57.517716 google-networking[1730]: INFO Starting Google Networking daemon. Mar 7 01:13:57.571548 groupadd[1740]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 7 01:13:57.575962 groupadd[1740]: group added to /etc/gshadow: name=google-sudoers Mar 7 01:13:57.641626 groupadd[1740]: new group: name=google-sudoers, GID=1000 Mar 7 01:13:57.671175 google-accounts[1728]: INFO Starting Google Accounts daemon. Mar 7 01:13:57.684291 google-accounts[1728]: WARNING OS Login not installed. Mar 7 01:13:57.685706 google-accounts[1728]: INFO Creating a new user account for 0. Mar 7 01:13:57.691245 init.sh[1748]: useradd: invalid user name '0': use --badname to ignore Mar 7 01:13:57.692030 google-accounts[1728]: WARNING Could not create user 0. 
Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 7 01:13:57.805163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:13:57.817660 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:13:57.818648 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:13:57.829013 systemd[1]: Startup finished in 10.167s (kernel) + 10.000s (userspace) = 20.168s. Mar 7 01:13:58.000288 systemd-resolved[1450]: Clock change detected. Flushing caches. Mar 7 01:13:58.000632 google-clock-skew[1729]: INFO Synced system time with hardware clock. Mar 7 01:13:58.742443 kubelet[1758]: E0307 01:13:58.742351 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:13:58.745721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:13:58.746200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:14:02.062609 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:14:02.070842 systemd[1]: Started sshd@0-10.128.0.18:22-68.220.241.50:55910.service - OpenSSH per-connection server daemon (68.220.241.50:55910). Mar 7 01:14:02.310356 sshd[1769]: Accepted publickey for core from 68.220.241.50 port 55910 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:02.313293 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:02.324884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 7 01:14:02.329691 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:14:02.334897 systemd-logind[1570]: New session 1 of user core. Mar 7 01:14:02.356524 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:14:02.365859 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:14:02.394340 (systemd)[1775]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:14:02.532000 systemd[1775]: Queued start job for default target default.target. Mar 7 01:14:02.532639 systemd[1775]: Created slice app.slice - User Application Slice. Mar 7 01:14:02.532683 systemd[1775]: Reached target paths.target - Paths. Mar 7 01:14:02.532709 systemd[1775]: Reached target timers.target - Timers. Mar 7 01:14:02.538519 systemd[1775]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:14:02.549292 systemd[1775]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:14:02.549419 systemd[1775]: Reached target sockets.target - Sockets. Mar 7 01:14:02.549447 systemd[1775]: Reached target basic.target - Basic System. Mar 7 01:14:02.549517 systemd[1775]: Reached target default.target - Main User Target. Mar 7 01:14:02.549572 systemd[1775]: Startup finished in 146ms. Mar 7 01:14:02.549765 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:14:02.558786 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:14:02.745396 systemd[1]: Started sshd@1-10.128.0.18:22-68.220.241.50:55926.service - OpenSSH per-connection server daemon (68.220.241.50:55926). Mar 7 01:14:02.983309 sshd[1787]: Accepted publickey for core from 68.220.241.50 port 55926 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:02.984191 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:02.991075 systemd-logind[1570]: New session 2 of user core. 
Mar 7 01:14:02.997898 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:14:03.158571 sshd[1787]: pam_unix(sshd:session): session closed for user core Mar 7 01:14:03.163332 systemd[1]: sshd@1-10.128.0.18:22-68.220.241.50:55926.service: Deactivated successfully. Mar 7 01:14:03.170583 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:14:03.171035 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:14:03.172928 systemd-logind[1570]: Removed session 2. Mar 7 01:14:03.194780 systemd[1]: Started sshd@2-10.128.0.18:22-68.220.241.50:55938.service - OpenSSH per-connection server daemon (68.220.241.50:55938). Mar 7 01:14:03.408594 sshd[1795]: Accepted publickey for core from 68.220.241.50 port 55938 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:03.409719 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:03.416457 systemd-logind[1570]: New session 3 of user core. Mar 7 01:14:03.422846 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:14:03.569216 sshd[1795]: pam_unix(sshd:session): session closed for user core Mar 7 01:14:03.574991 systemd[1]: sshd@2-10.128.0.18:22-68.220.241.50:55938.service: Deactivated successfully. Mar 7 01:14:03.580327 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:14:03.581296 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:14:03.582858 systemd-logind[1570]: Removed session 3. Mar 7 01:14:03.611778 systemd[1]: Started sshd@3-10.128.0.18:22-68.220.241.50:55940.service - OpenSSH per-connection server daemon (68.220.241.50:55940). 
Mar 7 01:14:03.852507 sshd[1803]: Accepted publickey for core from 68.220.241.50 port 55940 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:03.854235 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:03.860916 systemd-logind[1570]: New session 4 of user core. Mar 7 01:14:03.870741 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:14:04.040208 sshd[1803]: pam_unix(sshd:session): session closed for user core Mar 7 01:14:04.048689 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:14:04.050094 systemd[1]: sshd@3-10.128.0.18:22-68.220.241.50:55940.service: Deactivated successfully. Mar 7 01:14:04.053512 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:14:04.054779 systemd-logind[1570]: Removed session 4. Mar 7 01:14:04.081764 systemd[1]: Started sshd@4-10.128.0.18:22-68.220.241.50:55956.service - OpenSSH per-connection server daemon (68.220.241.50:55956). Mar 7 01:14:04.319532 sshd[1811]: Accepted publickey for core from 68.220.241.50 port 55956 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:04.321480 sshd[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:04.327843 systemd-logind[1570]: New session 5 of user core. Mar 7 01:14:04.335503 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:14:04.489761 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:14:04.490277 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:14:04.507335 sudo[1815]: pam_unix(sudo:session): session closed for user root Mar 7 01:14:04.543592 sshd[1811]: pam_unix(sshd:session): session closed for user core Mar 7 01:14:04.548592 systemd[1]: sshd@4-10.128.0.18:22-68.220.241.50:55956.service: Deactivated successfully. Mar 7 01:14:04.554522 systemd-logind[1570]: Session 5 logged out. 
Waiting for processes to exit. Mar 7 01:14:04.556138 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:14:04.558150 systemd-logind[1570]: Removed session 5. Mar 7 01:14:04.581861 systemd[1]: Started sshd@5-10.128.0.18:22-68.220.241.50:55968.service - OpenSSH per-connection server daemon (68.220.241.50:55968). Mar 7 01:14:04.796650 sshd[1820]: Accepted publickey for core from 68.220.241.50 port 55968 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:04.797567 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:04.804109 systemd-logind[1570]: New session 6 of user core. Mar 7 01:14:04.809841 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:14:04.940411 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:14:04.940927 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:14:04.946517 sudo[1825]: pam_unix(sudo:session): session closed for user root Mar 7 01:14:04.960181 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:14:04.960701 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:14:04.978146 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:14:04.981169 auditctl[1828]: No rules Mar 7 01:14:04.981833 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:14:04.982238 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:14:04.996268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:14:05.029183 augenrules[1847]: No rules Mar 7 01:14:05.031279 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 7 01:14:05.036042 sudo[1824]: pam_unix(sudo:session): session closed for user root Mar 7 01:14:05.067963 sshd[1820]: pam_unix(sshd:session): session closed for user core Mar 7 01:14:05.073753 systemd[1]: sshd@5-10.128.0.18:22-68.220.241.50:55968.service: Deactivated successfully. Mar 7 01:14:05.079198 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:14:05.080123 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:14:05.081500 systemd-logind[1570]: Removed session 6. Mar 7 01:14:05.107132 systemd[1]: Started sshd@6-10.128.0.18:22-68.220.241.50:55972.service - OpenSSH per-connection server daemon (68.220.241.50:55972). Mar 7 01:14:05.317758 sshd[1856]: Accepted publickey for core from 68.220.241.50 port 55972 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M Mar 7 01:14:05.320423 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:14:05.326897 systemd-logind[1570]: New session 7 of user core. Mar 7 01:14:05.333839 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:14:05.461831 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:14:05.462347 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:14:05.913749 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:14:05.917975 (dockerd)[1875]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:14:06.353269 dockerd[1875]: time="2026-03-07T01:14:06.353094598Z" level=info msg="Starting up" Mar 7 01:14:06.472336 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3985521317-merged.mount: Deactivated successfully. Mar 7 01:14:07.063402 dockerd[1875]: time="2026-03-07T01:14:07.063196632Z" level=info msg="Loading containers: start." 
Mar 7 01:14:07.217735 kernel: Initializing XFRM netlink socket Mar 7 01:14:07.331889 systemd-networkd[1221]: docker0: Link UP Mar 7 01:14:07.357772 dockerd[1875]: time="2026-03-07T01:14:07.357720999Z" level=info msg="Loading containers: done." Mar 7 01:14:07.378160 dockerd[1875]: time="2026-03-07T01:14:07.378095474Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:14:07.378432 dockerd[1875]: time="2026-03-07T01:14:07.378242851Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:14:07.378503 dockerd[1875]: time="2026-03-07T01:14:07.378461965Z" level=info msg="Daemon has completed initialization" Mar 7 01:14:07.417093 dockerd[1875]: time="2026-03-07T01:14:07.416943469Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:14:07.417553 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:14:08.320159 containerd[1594]: time="2026-03-07T01:14:08.320104616Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:14:08.846856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:14:08.860104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:08.884424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511237216.mount: Deactivated successfully. Mar 7 01:14:09.210754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:14:09.227112 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:14:09.317397 kubelet[2030]: E0307 01:14:09.316806 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:14:09.324162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:14:09.325271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:14:10.924842 containerd[1594]: time="2026-03-07T01:14:10.924752070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:10.926718 containerd[1594]: time="2026-03-07T01:14:10.926651148Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116192" Mar 7 01:14:10.928662 containerd[1594]: time="2026-03-07T01:14:10.927948813Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:10.932310 containerd[1594]: time="2026-03-07T01:14:10.932264798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:10.935383 containerd[1594]: time="2026-03-07T01:14:10.934301844Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.614135753s" Mar 7 01:14:10.935383 containerd[1594]: time="2026-03-07T01:14:10.934381667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:14:10.935770 containerd[1594]: time="2026-03-07T01:14:10.935739329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:14:12.580999 containerd[1594]: time="2026-03-07T01:14:12.580928051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:12.582637 containerd[1594]: time="2026-03-07T01:14:12.582574702Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021816" Mar 7 01:14:12.583841 containerd[1594]: time="2026-03-07T01:14:12.583765308Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:12.587345 containerd[1594]: time="2026-03-07T01:14:12.587282577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:12.588968 containerd[1594]: time="2026-03-07T01:14:12.588793116Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 
1.652793429s" Mar 7 01:14:12.588968 containerd[1594]: time="2026-03-07T01:14:12.588846577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:14:12.590208 containerd[1594]: time="2026-03-07T01:14:12.590181042Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:14:14.054242 containerd[1594]: time="2026-03-07T01:14:14.054172443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:14.056015 containerd[1594]: time="2026-03-07T01:14:14.055945656Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162752" Mar 7 01:14:14.057175 containerd[1594]: time="2026-03-07T01:14:14.057101588Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:14.061126 containerd[1594]: time="2026-03-07T01:14:14.061058873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:14.062701 containerd[1594]: time="2026-03-07T01:14:14.062524386Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.472301669s" Mar 7 01:14:14.062701 containerd[1594]: time="2026-03-07T01:14:14.062579754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference 
\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:14:14.063786 containerd[1594]: time="2026-03-07T01:14:14.063563955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:14:15.358550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143232574.mount: Deactivated successfully. Mar 7 01:14:16.041883 containerd[1594]: time="2026-03-07T01:14:16.041806191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:16.043236 containerd[1594]: time="2026-03-07T01:14:16.043158401Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828653" Mar 7 01:14:16.044637 containerd[1594]: time="2026-03-07T01:14:16.044567830Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:16.047266 containerd[1594]: time="2026-03-07T01:14:16.047199944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:16.048402 containerd[1594]: time="2026-03-07T01:14:16.048198369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.98457851s" Mar 7 01:14:16.048402 containerd[1594]: time="2026-03-07T01:14:16.048249573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:14:16.049301 
containerd[1594]: time="2026-03-07T01:14:16.049250035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:14:16.498404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398636471.mount: Deactivated successfully. Mar 7 01:14:17.982663 containerd[1594]: time="2026-03-07T01:14:17.982587138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:17.984632 containerd[1594]: time="2026-03-07T01:14:17.984559064Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Mar 7 01:14:17.986055 containerd[1594]: time="2026-03-07T01:14:17.985479536Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:17.989201 containerd[1594]: time="2026-03-07T01:14:17.989156049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:17.990879 containerd[1594]: time="2026-03-07T01:14:17.990831197Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.9415251s" Mar 7 01:14:17.990987 containerd[1594]: time="2026-03-07T01:14:17.990889184Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:14:17.992286 containerd[1594]: time="2026-03-07T01:14:17.992230662Z" level=info 
msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:14:18.442733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942182728.mount: Deactivated successfully. Mar 7 01:14:18.449037 containerd[1594]: time="2026-03-07T01:14:18.448975334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:18.450353 containerd[1594]: time="2026-03-07T01:14:18.450275226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Mar 7 01:14:18.451527 containerd[1594]: time="2026-03-07T01:14:18.451452973Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:18.454590 containerd[1594]: time="2026-03-07T01:14:18.454522350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:18.455915 containerd[1594]: time="2026-03-07T01:14:18.455708663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.431282ms" Mar 7 01:14:18.455915 containerd[1594]: time="2026-03-07T01:14:18.455752689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:14:18.456396 containerd[1594]: time="2026-03-07T01:14:18.456313183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:14:18.862710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959639528.mount: Deactivated 
successfully. Mar 7 01:14:19.575155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:14:19.582621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:19.986319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:19.996455 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:14:20.082041 kubelet[2228]: E0307 01:14:20.081987 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:14:20.085728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:14:20.086136 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:14:20.433660 containerd[1594]: time="2026-03-07T01:14:20.432976978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:20.435208 containerd[1594]: time="2026-03-07T01:14:20.434905948Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718846" Mar 7 01:14:20.436400 containerd[1594]: time="2026-03-07T01:14:20.436341725Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:20.439996 containerd[1594]: time="2026-03-07T01:14:20.439926498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:20.441569 containerd[1594]: time="2026-03-07T01:14:20.441524589Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.985173158s" Mar 7 01:14:20.441678 containerd[1594]: time="2026-03-07T01:14:20.441576035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:14:25.913911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:25.921724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:25.974541 systemd[1]: Reloading requested from client PID 2278 ('systemctl') (unit session-7.scope)... Mar 7 01:14:25.974565 systemd[1]: Reloading... 
Mar 7 01:14:26.137416 zram_generator::config[2323]: No configuration found. Mar 7 01:14:26.313437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:14:26.416418 systemd[1]: Reloading finished in 441 ms. Mar 7 01:14:26.448690 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 01:14:26.487001 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:14:26.487425 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:14:26.488154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:26.495788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:26.813652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:26.831579 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:14:26.884353 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:14:26.884353 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:14:26.884353 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:14:26.885011 kubelet[2386]: I0307 01:14:26.884463 2386 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:14:27.760944 kubelet[2386]: I0307 01:14:27.760884 2386 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:14:27.760944 kubelet[2386]: I0307 01:14:27.760923 2386 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:14:27.761349 kubelet[2386]: I0307 01:14:27.761310 2386 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:14:27.811234 kubelet[2386]: E0307 01:14:27.811179 2386 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:14:27.813981 kubelet[2386]: I0307 01:14:27.813623 2386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:14:27.821422 kubelet[2386]: E0307 01:14:27.821312 2386 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:14:27.821422 kubelet[2386]: I0307 01:14:27.821374 2386 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:14:27.825956 kubelet[2386]: I0307 01:14:27.825922 2386 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:14:27.826530 kubelet[2386]: I0307 01:14:27.826480 2386 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:14:27.826778 kubelet[2386]: I0307 01:14:27.826519 2386 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:14:27.826969 kubelet[2386]: I0307 01:14:27.826779 2386 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 7 01:14:27.826969 kubelet[2386]: I0307 01:14:27.826800 2386 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:14:27.827089 kubelet[2386]: I0307 01:14:27.827002 2386 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:14:27.837054 kubelet[2386]: I0307 01:14:27.836982 2386 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:14:27.837054 kubelet[2386]: I0307 01:14:27.837031 2386 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:14:27.837306 kubelet[2386]: I0307 01:14:27.837080 2386 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:14:27.837306 kubelet[2386]: I0307 01:14:27.837113 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:14:27.843428 kubelet[2386]: E0307 01:14:27.842424 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:14:27.843428 kubelet[2386]: E0307 01:14:27.842590 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:14:27.847891 kubelet[2386]: I0307 01:14:27.846813 2386 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:14:27.847891 kubelet[2386]: I0307 01:14:27.847886 2386 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:14:27.849791 kubelet[2386]: W0307 01:14:27.848950 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:14:27.874501 kubelet[2386]: I0307 01:14:27.874451 2386 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:14:27.874940 kubelet[2386]: I0307 01:14:27.874917 2386 server.go:1289] "Started kubelet" Mar 7 01:14:27.877230 kubelet[2386]: I0307 01:14:27.877184 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:14:27.880907 kubelet[2386]: E0307 01:14:27.878866 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4.189a6a1b151b75ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,UID:ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,},FirstTimestamp:2026-03-07 01:14:27.87449393 +0000 UTC m=+1.037357926,LastTimestamp:2026-03-07 01:14:27.87449393 +0000 UTC m=+1.037357926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,}" Mar 7 01:14:27.883664 kubelet[2386]: I0307 01:14:27.883609 2386 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:14:27.884133 kubelet[2386]: I0307 01:14:27.884064 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 
01:14:27.886379 kubelet[2386]: I0307 01:14:27.885850 2386 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:14:27.887774 kubelet[2386]: I0307 01:14:27.887239 2386 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:14:27.889781 kubelet[2386]: E0307 01:14:27.889754 2386 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:14:27.891948 kubelet[2386]: I0307 01:14:27.890471 2386 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:14:27.891948 kubelet[2386]: I0307 01:14:27.890588 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:14:27.891948 kubelet[2386]: E0307 01:14:27.890883 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" Mar 7 01:14:27.891948 kubelet[2386]: I0307 01:14:27.891448 2386 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:14:27.891948 kubelet[2386]: I0307 01:14:27.891516 2386 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:14:27.892730 kubelet[2386]: E0307 01:14:27.892693 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:14:27.895345 kubelet[2386]: E0307 01:14:27.895255 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="200ms" Mar 7 01:14:27.897790 kubelet[2386]: I0307 01:14:27.897748 2386 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:14:27.900384 kubelet[2386]: I0307 01:14:27.900104 2386 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:14:27.900384 kubelet[2386]: I0307 01:14:27.900123 2386 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:14:27.929119 kubelet[2386]: I0307 01:14:27.929063 2386 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:14:27.931303 kubelet[2386]: I0307 01:14:27.931149 2386 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:14:27.931303 kubelet[2386]: I0307 01:14:27.931181 2386 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:14:27.931303 kubelet[2386]: I0307 01:14:27.931221 2386 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:14:27.931303 kubelet[2386]: I0307 01:14:27.931234 2386 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:14:27.931303 kubelet[2386]: E0307 01:14:27.931297 2386 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:14:27.938121 kubelet[2386]: E0307 01:14:27.937877 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:14:27.956174 kubelet[2386]: I0307 01:14:27.956120 2386 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:14:27.956174 kubelet[2386]: I0307 01:14:27.956143 2386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:14:27.956174 kubelet[2386]: I0307 01:14:27.956176 2386 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:14:27.958761 kubelet[2386]: I0307 01:14:27.958718 2386 policy_none.go:49] "None policy: Start" Mar 7 01:14:27.958761 kubelet[2386]: I0307 01:14:27.958747 2386 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:14:27.958761 kubelet[2386]: I0307 01:14:27.958766 2386 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:14:27.968390 kubelet[2386]: E0307 01:14:27.967204 2386 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:14:27.968390 kubelet[2386]: I0307 01:14:27.967570 2386 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:14:27.968390 kubelet[2386]: I0307 01:14:27.967591 2386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:14:27.969957 kubelet[2386]: I0307 01:14:27.969930 
2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:14:27.972050 kubelet[2386]: E0307 01:14:27.972020 2386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:14:27.972283 kubelet[2386]: E0307 01:14:27.972264 2386 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" Mar 7 01:14:28.055278 kubelet[2386]: E0307 01:14:28.055134 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.060780 kubelet[2386]: E0307 01:14:28.060728 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.071265 kubelet[2386]: E0307 01:14:28.071208 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.073320 kubelet[2386]: I0307 01:14:28.073287 2386 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.073753 kubelet[2386]: E0307 01:14:28.073717 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.096925 kubelet[2386]: E0307 01:14:28.096863 2386 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="400ms" Mar 7 01:14:28.192265 kubelet[2386]: I0307 01:14:28.192206 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.192265 kubelet[2386]: I0307 01:14:28.192274 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.192265 kubelet[2386]: I0307 01:14:28.192309 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.192886 kubelet[2386]: I0307 01:14:28.192357 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.192886 kubelet[2386]: I0307 01:14:28.192441 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04ec46fdadc20cca3fc5d9007d29ce3d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"04ec46fdadc20cca3fc5d9007d29ce3d\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.192886 kubelet[2386]: I0307 01:14:28.192503 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36c33b29bc177648a26a411b9d30fad2-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"36c33b29bc177648a26a411b9d30fad2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.192886 kubelet[2386]: I0307 01:14:28.192535 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36c33b29bc177648a26a411b9d30fad2-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"36c33b29bc177648a26a411b9d30fad2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.193015 kubelet[2386]: I0307 01:14:28.192565 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/36c33b29bc177648a26a411b9d30fad2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"36c33b29bc177648a26a411b9d30fad2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.193015 kubelet[2386]: I0307 01:14:28.192618 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.279154 kubelet[2386]: I0307 01:14:28.279104 2386 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.279715 kubelet[2386]: E0307 01:14:28.279653 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.357179 containerd[1594]: time="2026-03-07T01:14:28.356991014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,Uid:36c33b29bc177648a26a411b9d30fad2,Namespace:kube-system,Attempt:0,}" Mar 7 01:14:28.362040 containerd[1594]: time="2026-03-07T01:14:28.361979899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,Uid:2e4476ef1298861ed9dd157135920139,Namespace:kube-system,Attempt:0,}" Mar 7 01:14:28.373346 containerd[1594]: time="2026-03-07T01:14:28.372979808Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,Uid:04ec46fdadc20cca3fc5d9007d29ce3d,Namespace:kube-system,Attempt:0,}" Mar 7 01:14:28.497978 kubelet[2386]: E0307 01:14:28.497905 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="800ms" Mar 7 01:14:28.688531 kubelet[2386]: I0307 01:14:28.688124 2386 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.688781 kubelet[2386]: E0307 01:14:28.688726 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:28.836195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62640056.mount: Deactivated successfully. 
Mar 7 01:14:28.844099 containerd[1594]: time="2026-03-07T01:14:28.844039460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:14:28.845321 containerd[1594]: time="2026-03-07T01:14:28.845257314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Mar 7 01:14:28.846587 containerd[1594]: time="2026-03-07T01:14:28.846536717Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:14:28.848207 containerd[1594]: time="2026-03-07T01:14:28.848158437Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:14:28.849255 containerd[1594]: time="2026-03-07T01:14:28.849091087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:14:28.849356 containerd[1594]: time="2026-03-07T01:14:28.849317204Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:14:28.850563 containerd[1594]: time="2026-03-07T01:14:28.850505789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:14:28.853395 containerd[1594]: time="2026-03-07T01:14:28.853296764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:14:28.856461 
containerd[1594]: time="2026-03-07T01:14:28.855432872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.350958ms" Mar 7 01:14:28.858215 containerd[1594]: time="2026-03-07T01:14:28.858165092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.065504ms" Mar 7 01:14:28.860983 containerd[1594]: time="2026-03-07T01:14:28.860753935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.662622ms" Mar 7 01:14:29.062675 containerd[1594]: time="2026-03-07T01:14:29.061693911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:29.062675 containerd[1594]: time="2026-03-07T01:14:29.061779003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:29.062675 containerd[1594]: time="2026-03-07T01:14:29.061808032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:29.062675 containerd[1594]: time="2026-03-07T01:14:29.061950430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:29.069772 containerd[1594]: time="2026-03-07T01:14:29.068645564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:29.069772 containerd[1594]: time="2026-03-07T01:14:29.068781476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:29.069772 containerd[1594]: time="2026-03-07T01:14:29.068816340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:29.069772 containerd[1594]: time="2026-03-07T01:14:29.068983411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:29.078661 containerd[1594]: time="2026-03-07T01:14:29.078012019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:29.079652 containerd[1594]: time="2026-03-07T01:14:29.079291488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:29.079652 containerd[1594]: time="2026-03-07T01:14:29.079343413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:29.079652 containerd[1594]: time="2026-03-07T01:14:29.079531049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:29.132871 kubelet[2386]: E0307 01:14:29.132829 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:14:29.229315 kubelet[2386]: E0307 01:14:29.229233 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:14:29.230242 containerd[1594]: time="2026-03-07T01:14:29.230190894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,Uid:36c33b29bc177648a26a411b9d30fad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f80b3f82623dad6793f3c4340ced8bcb662cbf582dd949c3073af60af351aa7\"" Mar 7 01:14:29.234291 containerd[1594]: time="2026-03-07T01:14:29.234242041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,Uid:2e4476ef1298861ed9dd157135920139,Namespace:kube-system,Attempt:0,} returns sandbox id \"f964d8640ff23f91f6773168e5b57f793e1ced6e0b20e6d63276446cf8beee60\"" Mar 7 01:14:29.234686 containerd[1594]: time="2026-03-07T01:14:29.234649977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4,Uid:04ec46fdadc20cca3fc5d9007d29ce3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ea2510d9c6519a3be1b7c29613a5f3bad0444efed1320559f14dd459d237696\"" Mar 7 01:14:29.234941 
kubelet[2386]: E0307 01:14:29.234905 2386 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fc" Mar 7 01:14:29.239398 kubelet[2386]: E0307 01:14:29.238158 2386 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fc" Mar 7 01:14:29.239398 kubelet[2386]: E0307 01:14:29.238226 2386 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cf" Mar 7 01:14:29.243180 containerd[1594]: time="2026-03-07T01:14:29.243145311Z" level=info msg="CreateContainer within sandbox \"3f80b3f82623dad6793f3c4340ced8bcb662cbf582dd949c3073af60af351aa7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:14:29.245352 containerd[1594]: time="2026-03-07T01:14:29.245301612Z" level=info msg="CreateContainer within sandbox \"4ea2510d9c6519a3be1b7c29613a5f3bad0444efed1320559f14dd459d237696\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:14:29.247117 containerd[1594]: time="2026-03-07T01:14:29.247040952Z" level=info msg="CreateContainer within sandbox \"f964d8640ff23f91f6773168e5b57f793e1ced6e0b20e6d63276446cf8beee60\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:14:29.265677 containerd[1594]: time="2026-03-07T01:14:29.265603231Z" level=info msg="CreateContainer within sandbox \"3f80b3f82623dad6793f3c4340ced8bcb662cbf582dd949c3073af60af351aa7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"9f5b2571b7a139f97978916ef00a770f381aff45dc24e57c0ba384093d97da13\"" Mar 7 01:14:29.266739 containerd[1594]: time="2026-03-07T01:14:29.266666428Z" level=info msg="StartContainer for \"9f5b2571b7a139f97978916ef00a770f381aff45dc24e57c0ba384093d97da13\"" Mar 7 01:14:29.273928 containerd[1594]: time="2026-03-07T01:14:29.273873145Z" level=info msg="CreateContainer within sandbox \"4ea2510d9c6519a3be1b7c29613a5f3bad0444efed1320559f14dd459d237696\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"39dbaa569c6c6b5aaf12db08b666da7c8c4896fc149d8fec29eb6567faba9f5e\"" Mar 7 01:14:29.274946 containerd[1594]: time="2026-03-07T01:14:29.274774436Z" level=info msg="StartContainer for \"39dbaa569c6c6b5aaf12db08b666da7c8c4896fc149d8fec29eb6567faba9f5e\"" Mar 7 01:14:29.276001 containerd[1594]: time="2026-03-07T01:14:29.275884839Z" level=info msg="CreateContainer within sandbox \"f964d8640ff23f91f6773168e5b57f793e1ced6e0b20e6d63276446cf8beee60\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e8520b4221c4ba8fcc1b1173c9258d8cf4d418c6786f77ecb54048aec90a107\"" Mar 7 01:14:29.277621 containerd[1594]: time="2026-03-07T01:14:29.277587783Z" level=info msg="StartContainer for \"4e8520b4221c4ba8fcc1b1173c9258d8cf4d418c6786f77ecb54048aec90a107\"" Mar 7 01:14:29.284221 kubelet[2386]: E0307 01:14:29.284182 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:14:29.299348 kubelet[2386]: E0307 01:14:29.299258 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4?timeout=10s\": dial tcp 
10.128.0.18:6443: connect: connection refused" interval="1.6s" Mar 7 01:14:29.317878 kubelet[2386]: E0307 01:14:29.317486 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:14:29.423394 containerd[1594]: time="2026-03-07T01:14:29.422116207Z" level=info msg="StartContainer for \"4e8520b4221c4ba8fcc1b1173c9258d8cf4d418c6786f77ecb54048aec90a107\" returns successfully" Mar 7 01:14:29.472418 containerd[1594]: time="2026-03-07T01:14:29.470587067Z" level=info msg="StartContainer for \"9f5b2571b7a139f97978916ef00a770f381aff45dc24e57c0ba384093d97da13\" returns successfully" Mar 7 01:14:29.496642 kubelet[2386]: I0307 01:14:29.496592 2386 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:29.499197 kubelet[2386]: E0307 01:14:29.499140 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:29.518159 containerd[1594]: time="2026-03-07T01:14:29.518104179Z" level=info msg="StartContainer for \"39dbaa569c6c6b5aaf12db08b666da7c8c4896fc149d8fec29eb6567faba9f5e\" returns successfully" Mar 7 01:14:29.953154 kubelet[2386]: E0307 01:14:29.953110 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:29.953694 kubelet[2386]: E0307 01:14:29.953658 2386 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:29.961966 kubelet[2386]: E0307 01:14:29.961928 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:30.967123 kubelet[2386]: E0307 01:14:30.967075 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:30.969323 kubelet[2386]: E0307 01:14:30.969277 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:31.107397 kubelet[2386]: I0307 01:14:31.105965 2386 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.598531 kubelet[2386]: E0307 01:14:33.598460 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.663036 kubelet[2386]: E0307 01:14:33.662699 2386 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.681943 kubelet[2386]: I0307 01:14:33.681907 2386 kubelet_node_status.go:78] 
"Successfully registered node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.682204 kubelet[2386]: E0307 01:14:33.682185 2386 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\": node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" Mar 7 01:14:33.691630 kubelet[2386]: I0307 01:14:33.691425 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.711618 kubelet[2386]: E0307 01:14:33.711562 2386 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.711618 kubelet[2386]: I0307 01:14:33.711608 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.726584 kubelet[2386]: E0307 01:14:33.726536 2386 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.726584 kubelet[2386]: I0307 01:14:33.726584 2386 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.735432 kubelet[2386]: E0307 01:14:33.734746 2386 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:33.845522 kubelet[2386]: I0307 01:14:33.845478 2386 apiserver.go:52] "Watching apiserver" Mar 7 01:14:33.892694 kubelet[2386]: I0307 01:14:33.891837 2386 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:14:35.637500 systemd[1]: Reloading requested from client PID 2661 ('systemctl') (unit session-7.scope)... Mar 7 01:14:35.637522 systemd[1]: Reloading... Mar 7 01:14:35.773408 zram_generator::config[2701]: No configuration found. Mar 7 01:14:35.915608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:14:36.030846 systemd[1]: Reloading finished in 392 ms. Mar 7 01:14:36.076738 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:36.096154 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:14:36.096661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:36.110495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:36.382611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:36.398771 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:14:36.473306 kubelet[2759]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:14:36.473306 kubelet[2759]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 7 01:14:36.473306 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:14:36.473935 kubelet[2759]: I0307 01:14:36.473444 2759 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:14:36.482477 kubelet[2759]: I0307 01:14:36.481402 2759 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:14:36.482477 kubelet[2759]: I0307 01:14:36.481430 2759 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:14:36.482477 kubelet[2759]: I0307 01:14:36.481653 2759 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:14:36.483224 kubelet[2759]: I0307 01:14:36.483186 2759 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:14:36.485957 kubelet[2759]: I0307 01:14:36.485912 2759 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:14:36.490431 kubelet[2759]: E0307 01:14:36.490344 2759 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:14:36.490431 kubelet[2759]: I0307 01:14:36.490434 2759 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:14:36.497386 kubelet[2759]: I0307 01:14:36.496018 2759 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:14:36.497386 kubelet[2759]: I0307 01:14:36.496793 2759 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:14:36.497386 kubelet[2759]: I0307 01:14:36.496827 2759 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:14:36.497386 kubelet[2759]: I0307 01:14:36.497212 2759 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 7 01:14:36.497753 kubelet[2759]: I0307 01:14:36.497230 2759 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:14:36.497753 kubelet[2759]: I0307 01:14:36.497298 2759 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:14:36.497753 kubelet[2759]: I0307 01:14:36.497575 2759 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:14:36.497753 kubelet[2759]: I0307 01:14:36.497594 2759 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:14:36.497753 kubelet[2759]: I0307 01:14:36.497637 2759 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:14:36.497753 kubelet[2759]: I0307 01:14:36.497668 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:14:36.502262 kubelet[2759]: I0307 01:14:36.502234 2759 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:14:36.509032 kubelet[2759]: I0307 01:14:36.507719 2759 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:14:36.530566 kubelet[2759]: I0307 01:14:36.530230 2759 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:14:36.530566 kubelet[2759]: I0307 01:14:36.530295 2759 server.go:1289] "Started kubelet" Mar 7 01:14:36.535554 kubelet[2759]: I0307 01:14:36.531474 2759 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:14:36.535554 kubelet[2759]: I0307 01:14:36.531898 2759 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:14:36.535554 kubelet[2759]: I0307 01:14:36.531988 2759 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:14:36.535554 kubelet[2759]: I0307 01:14:36.533692 2759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" 
Mar 7 01:14:36.537067 kubelet[2759]: I0307 01:14:36.535701 2759 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:14:36.543424 kubelet[2759]: I0307 01:14:36.542104 2759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:14:36.545200 kubelet[2759]: I0307 01:14:36.545168 2759 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:14:36.545487 kubelet[2759]: E0307 01:14:36.545458 2759 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" not found" Mar 7 01:14:36.546903 kubelet[2759]: I0307 01:14:36.546860 2759 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:14:36.547041 kubelet[2759]: I0307 01:14:36.547028 2759 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:14:36.557410 kubelet[2759]: I0307 01:14:36.557321 2759 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:14:36.557614 kubelet[2759]: I0307 01:14:36.557583 2759 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:14:36.563228 kubelet[2759]: E0307 01:14:36.563196 2759 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:14:36.564581 kubelet[2759]: I0307 01:14:36.564531 2759 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:14:36.576536 kubelet[2759]: I0307 01:14:36.576482 2759 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:14:36.587685 kubelet[2759]: I0307 01:14:36.587650 2759 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:14:36.588176 kubelet[2759]: I0307 01:14:36.588120 2759 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:14:36.588605 kubelet[2759]: I0307 01:14:36.588586 2759 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:14:36.594800 kubelet[2759]: I0307 01:14:36.594778 2759 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:14:36.595182 kubelet[2759]: E0307 01:14:36.595120 2759 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:14:36.696841 kubelet[2759]: E0307 01:14:36.696399 2759 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:14:36.744871 kubelet[2759]: I0307 01:14:36.744835 2759 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:14:36.744871 kubelet[2759]: I0307 01:14:36.744862 2759 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:14:36.745115 kubelet[2759]: I0307 01:14:36.744911 2759 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:14:36.745181 kubelet[2759]: I0307 01:14:36.745141 2759 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:14:36.745181 kubelet[2759]: I0307 01:14:36.745159 2759 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:14:36.745290 kubelet[2759]: I0307 01:14:36.745189 2759 policy_none.go:49] "None policy: Start" Mar 7 01:14:36.745290 kubelet[2759]: I0307 01:14:36.745216 2759 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:14:36.745290 kubelet[2759]: I0307 01:14:36.745235 2759 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:14:36.746905 kubelet[2759]: I0307 01:14:36.745423 2759 state_mem.go:75] "Updated machine memory state" Mar 7 01:14:36.750278 kubelet[2759]: E0307 01:14:36.748934 2759 
manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:14:36.750278 kubelet[2759]: I0307 01:14:36.750256 2759 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:14:36.750432 kubelet[2759]: I0307 01:14:36.750274 2759 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:14:36.752851 kubelet[2759]: I0307 01:14:36.751429 2759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:14:36.756069 kubelet[2759]: E0307 01:14:36.756033 2759 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:14:36.866467 kubelet[2759]: I0307 01:14:36.866397 2759 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.877534 kubelet[2759]: I0307 01:14:36.877121 2759 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.877534 kubelet[2759]: I0307 01:14:36.877226 2759 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.897877 kubelet[2759]: I0307 01:14:36.897816 2759 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.898764 kubelet[2759]: I0307 01:14:36.898636 2759 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.899480 kubelet[2759]: I0307 01:14:36.898987 2759 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.907397 kubelet[2759]: I0307 
01:14:36.904755 2759 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 7 01:14:36.907397 kubelet[2759]: I0307 01:14:36.907012 2759 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 7 01:14:36.909190 kubelet[2759]: I0307 01:14:36.909115 2759 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 7 01:14:36.949719 kubelet[2759]: I0307 01:14:36.949482 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36c33b29bc177648a26a411b9d30fad2-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"36c33b29bc177648a26a411b9d30fad2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.949719 kubelet[2759]: I0307 01:14:36.949538 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36c33b29bc177648a26a411b9d30fad2-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"36c33b29bc177648a26a411b9d30fad2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.949719 kubelet[2759]: I0307 01:14:36.949578 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36c33b29bc177648a26a411b9d30fad2-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"36c33b29bc177648a26a411b9d30fad2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.949719 kubelet[2759]: I0307 01:14:36.949610 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.950107 kubelet[2759]: I0307 01:14:36.949645 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.950107 kubelet[2759]: I0307 01:14:36.949673 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.950107 kubelet[2759]: I0307 01:14:36.949702 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.950107 kubelet[2759]: I0307 01:14:36.949732 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04ec46fdadc20cca3fc5d9007d29ce3d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"04ec46fdadc20cca3fc5d9007d29ce3d\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:36.950346 kubelet[2759]: I0307 01:14:36.949761 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4476ef1298861ed9dd157135920139-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" (UID: \"2e4476ef1298861ed9dd157135920139\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:37.513516 kubelet[2759]: I0307 01:14:37.513465 2759 apiserver.go:52] "Watching apiserver" Mar 7 01:14:37.548050 kubelet[2759]: I0307 01:14:37.547992 2759 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:14:37.658233 kubelet[2759]: I0307 01:14:37.658193 2759 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:37.669879 kubelet[2759]: I0307 01:14:37.669844 2759 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 7 01:14:37.670047 kubelet[2759]: E0307 01:14:37.669915 2759 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:14:37.680766 kubelet[2759]: I0307 01:14:37.678953 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" podStartSLOduration=1.678928373 podStartE2EDuration="1.678928373s" podCreationTimestamp="2026-03-07 01:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:14:37.667507242 +0000 UTC m=+1.259879699" watchObservedRunningTime="2026-03-07 01:14:37.678928373 +0000 UTC m=+1.271300830" Mar 7 01:14:37.691722 kubelet[2759]: I0307 01:14:37.691646 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" podStartSLOduration=1.6916235149999999 podStartE2EDuration="1.691623515s" podCreationTimestamp="2026-03-07 01:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:14:37.691425166 +0000 UTC m=+1.283797622" watchObservedRunningTime="2026-03-07 01:14:37.691623515 +0000 UTC m=+1.283995968" Mar 7 01:14:37.691943 kubelet[2759]: I0307 01:14:37.691790 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" podStartSLOduration=1.691782251 podStartE2EDuration="1.691782251s" podCreationTimestamp="2026-03-07 01:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:14:37.679560222 +0000 UTC m=+1.271932679" watchObservedRunningTime="2026-03-07 
01:14:37.691782251 +0000 UTC m=+1.284154707" Mar 7 01:14:40.720298 update_engine[1572]: I20260307 01:14:40.720198 1572 update_attempter.cc:509] Updating boot flags... Mar 7 01:14:40.792407 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2816) Mar 7 01:14:40.923395 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2818) Mar 7 01:14:41.036397 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2818) Mar 7 01:14:41.425685 kubelet[2759]: I0307 01:14:41.425643 2759 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:14:41.428751 kubelet[2759]: I0307 01:14:41.426595 2759 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:14:41.428821 containerd[1594]: time="2026-03-07T01:14:41.426278716Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:14:42.587006 kubelet[2759]: I0307 01:14:42.586946 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5cea9122-b378-4acb-975f-3ed04f261345-kube-proxy\") pod \"kube-proxy-jht6k\" (UID: \"5cea9122-b378-4acb-975f-3ed04f261345\") " pod="kube-system/kube-proxy-jht6k" Mar 7 01:14:42.588340 kubelet[2759]: I0307 01:14:42.587015 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cea9122-b378-4acb-975f-3ed04f261345-xtables-lock\") pod \"kube-proxy-jht6k\" (UID: \"5cea9122-b378-4acb-975f-3ed04f261345\") " pod="kube-system/kube-proxy-jht6k" Mar 7 01:14:42.588340 kubelet[2759]: I0307 01:14:42.587049 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cea9122-b378-4acb-975f-3ed04f261345-lib-modules\") pod \"kube-proxy-jht6k\" (UID: \"5cea9122-b378-4acb-975f-3ed04f261345\") " pod="kube-system/kube-proxy-jht6k" Mar 7 01:14:42.588340 kubelet[2759]: I0307 01:14:42.587074 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqld\" (UniqueName: \"kubernetes.io/projected/5cea9122-b378-4acb-975f-3ed04f261345-kube-api-access-mkqld\") pod \"kube-proxy-jht6k\" (UID: \"5cea9122-b378-4acb-975f-3ed04f261345\") " pod="kube-system/kube-proxy-jht6k" Mar 7 01:14:42.687936 kubelet[2759]: I0307 01:14:42.687825 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb58859e-8b4c-4370-b9a0-7ba98a56d372-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-95s5t\" (UID: \"fb58859e-8b4c-4370-b9a0-7ba98a56d372\") " pod="tigera-operator/tigera-operator-6bf85f8dd-95s5t" Mar 7 01:14:42.687936 kubelet[2759]: I0307 01:14:42.687922 
2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfffx\" (UniqueName: \"kubernetes.io/projected/fb58859e-8b4c-4370-b9a0-7ba98a56d372-kube-api-access-dfffx\") pod \"tigera-operator-6bf85f8dd-95s5t\" (UID: \"fb58859e-8b4c-4370-b9a0-7ba98a56d372\") " pod="tigera-operator/tigera-operator-6bf85f8dd-95s5t" Mar 7 01:14:42.831452 containerd[1594]: time="2026-03-07T01:14:42.831391813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jht6k,Uid:5cea9122-b378-4acb-975f-3ed04f261345,Namespace:kube-system,Attempt:0,}" Mar 7 01:14:42.868776 containerd[1594]: time="2026-03-07T01:14:42.867727204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:42.868776 containerd[1594]: time="2026-03-07T01:14:42.867830308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:42.868776 containerd[1594]: time="2026-03-07T01:14:42.867882936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:42.868776 containerd[1594]: time="2026-03-07T01:14:42.868149543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:42.921407 containerd[1594]: time="2026-03-07T01:14:42.920751581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-95s5t,Uid:fb58859e-8b4c-4370-b9a0-7ba98a56d372,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:14:42.934961 containerd[1594]: time="2026-03-07T01:14:42.934908256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jht6k,Uid:5cea9122-b378-4acb-975f-3ed04f261345,Namespace:kube-system,Attempt:0,} returns sandbox id \"03726ed89721ad7b9dbe88c2721cb127c154f6135ee100510a34af2003cc919d\"" Mar 7 01:14:42.943666 containerd[1594]: time="2026-03-07T01:14:42.943613270Z" level=info msg="CreateContainer within sandbox \"03726ed89721ad7b9dbe88c2721cb127c154f6135ee100510a34af2003cc919d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:14:42.961098 containerd[1594]: time="2026-03-07T01:14:42.960911846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:42.961098 containerd[1594]: time="2026-03-07T01:14:42.961022828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:42.961656 containerd[1594]: time="2026-03-07T01:14:42.961483185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:42.961796 containerd[1594]: time="2026-03-07T01:14:42.961679222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:42.965819 containerd[1594]: time="2026-03-07T01:14:42.965766579Z" level=info msg="CreateContainer within sandbox \"03726ed89721ad7b9dbe88c2721cb127c154f6135ee100510a34af2003cc919d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b65da3cd8ce5475cb9ac89379b7aba64bea475c141e03b4f2bd24ce88bb5d4b3\"" Mar 7 01:14:42.967529 containerd[1594]: time="2026-03-07T01:14:42.967493324Z" level=info msg="StartContainer for \"b65da3cd8ce5475cb9ac89379b7aba64bea475c141e03b4f2bd24ce88bb5d4b3\"" Mar 7 01:14:43.072824 containerd[1594]: time="2026-03-07T01:14:43.072630232Z" level=info msg="StartContainer for \"b65da3cd8ce5475cb9ac89379b7aba64bea475c141e03b4f2bd24ce88bb5d4b3\" returns successfully" Mar 7 01:14:43.081419 containerd[1594]: time="2026-03-07T01:14:43.080826497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-95s5t,Uid:fb58859e-8b4c-4370-b9a0-7ba98a56d372,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e89654844fd6ebfe1faf2e2f6d6f06830ae7eb9eb3410e1b3d08cf68a3d186c1\"" Mar 7 01:14:43.085201 containerd[1594]: time="2026-03-07T01:14:43.084698230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:14:44.078449 kubelet[2759]: I0307 01:14:44.078312 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jht6k" podStartSLOduration=2.078289922 podStartE2EDuration="2.078289922s" podCreationTimestamp="2026-03-07 01:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:14:43.689099019 +0000 UTC m=+7.281471474" watchObservedRunningTime="2026-03-07 01:14:44.078289922 +0000 UTC m=+7.670662377" Mar 7 01:14:44.871997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520367427.mount: Deactivated successfully. 
Mar 7 01:14:46.478525 containerd[1594]: time="2026-03-07T01:14:46.478459489Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:46.479916 containerd[1594]: time="2026-03-07T01:14:46.479851170Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:14:46.481389 containerd[1594]: time="2026-03-07T01:14:46.481285197Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:46.484405 containerd[1594]: time="2026-03-07T01:14:46.484324204Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:46.485739 containerd[1594]: time="2026-03-07T01:14:46.485537529Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.400794072s" Mar 7 01:14:46.485739 containerd[1594]: time="2026-03-07T01:14:46.485583033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:14:46.492504 containerd[1594]: time="2026-03-07T01:14:46.491853423Z" level=info msg="CreateContainer within sandbox \"e89654844fd6ebfe1faf2e2f6d6f06830ae7eb9eb3410e1b3d08cf68a3d186c1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:14:46.509940 containerd[1594]: time="2026-03-07T01:14:46.509894264Z" level=info msg="CreateContainer within sandbox 
\"e89654844fd6ebfe1faf2e2f6d6f06830ae7eb9eb3410e1b3d08cf68a3d186c1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10f313c4b358e43cc801a62db83668ffc9e34034d39093aa800074f10ab75bc3\"" Mar 7 01:14:46.511653 containerd[1594]: time="2026-03-07T01:14:46.510554460Z" level=info msg="StartContainer for \"10f313c4b358e43cc801a62db83668ffc9e34034d39093aa800074f10ab75bc3\"" Mar 7 01:14:46.592615 containerd[1594]: time="2026-03-07T01:14:46.592563505Z" level=info msg="StartContainer for \"10f313c4b358e43cc801a62db83668ffc9e34034d39093aa800074f10ab75bc3\" returns successfully" Mar 7 01:14:47.143168 kubelet[2759]: I0307 01:14:47.143043 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-95s5t" podStartSLOduration=1.740028331 podStartE2EDuration="5.143016652s" podCreationTimestamp="2026-03-07 01:14:42 +0000 UTC" firstStartedPulling="2026-03-07 01:14:43.084041516 +0000 UTC m=+6.676413972" lastFinishedPulling="2026-03-07 01:14:46.487029543 +0000 UTC m=+10.079402293" observedRunningTime="2026-03-07 01:14:46.699190687 +0000 UTC m=+10.291563144" watchObservedRunningTime="2026-03-07 01:14:47.143016652 +0000 UTC m=+10.735389110" Mar 7 01:14:53.847977 sudo[1860]: pam_unix(sudo:session): session closed for user root Mar 7 01:14:53.883476 sshd[1856]: pam_unix(sshd:session): session closed for user core Mar 7 01:14:53.895883 systemd[1]: sshd@6-10.128.0.18:22-68.220.241.50:55972.service: Deactivated successfully. Mar 7 01:14:53.907680 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:14:53.913569 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:14:53.918597 systemd-logind[1570]: Removed session 7. 
Mar 7 01:14:55.785443 kubelet[2759]: I0307 01:14:55.784262 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d92594c3-928f-4760-b69a-e6640f3cb833-tigera-ca-bundle\") pod \"calico-typha-f9879c678-2mljm\" (UID: \"d92594c3-928f-4760-b69a-e6640f3cb833\") " pod="calico-system/calico-typha-f9879c678-2mljm"
Mar 7 01:14:55.785443 kubelet[2759]: I0307 01:14:55.784325 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcdqv\" (UniqueName: \"kubernetes.io/projected/d92594c3-928f-4760-b69a-e6640f3cb833-kube-api-access-gcdqv\") pod \"calico-typha-f9879c678-2mljm\" (UID: \"d92594c3-928f-4760-b69a-e6640f3cb833\") " pod="calico-system/calico-typha-f9879c678-2mljm"
Mar 7 01:14:55.785443 kubelet[2759]: I0307 01:14:55.784374 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d92594c3-928f-4760-b69a-e6640f3cb833-typha-certs\") pod \"calico-typha-f9879c678-2mljm\" (UID: \"d92594c3-928f-4760-b69a-e6640f3cb833\") " pod="calico-system/calico-typha-f9879c678-2mljm"
Mar 7 01:14:55.885586 kubelet[2759]: I0307 01:14:55.884793 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-bpffs\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.885586 kubelet[2759]: I0307 01:14:55.884854 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-cni-net-dir\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.885586 kubelet[2759]: I0307 01:14:55.884893 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-tigera-ca-bundle\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.885586 kubelet[2759]: I0307 01:14:55.884938 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-cni-bin-dir\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.885586 kubelet[2759]: I0307 01:14:55.884975 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-flexvol-driver-host\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886018 kubelet[2759]: I0307 01:14:55.885007 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwm6p\" (UniqueName: \"kubernetes.io/projected/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-kube-api-access-fwm6p\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886018 kubelet[2759]: I0307 01:14:55.885047 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-policysync\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886018 kubelet[2759]: I0307 01:14:55.885079 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-var-lib-calico\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886018 kubelet[2759]: I0307 01:14:55.885160 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-node-certs\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886018 kubelet[2759]: I0307 01:14:55.885198 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-var-run-calico\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886279 kubelet[2759]: I0307 01:14:55.885232 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-cni-log-dir\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886279 kubelet[2759]: I0307 01:14:55.885266 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-sys-fs\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.886279 kubelet[2759]: I0307 01:14:55.885297 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-xtables-lock\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.888392 kubelet[2759]: I0307 01:14:55.886489 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-lib-modules\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.888392 kubelet[2759]: I0307 01:14:55.886575 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1d979d73-07e8-4f4e-bdbc-54c90feb2cf0-nodeproc\") pod \"calico-node-njjw5\" (UID: \"1d979d73-07e8-4f4e-bdbc-54c90feb2cf0\") " pod="calico-system/calico-node-njjw5"
Mar 7 01:14:55.963574 kubelet[2759]: E0307 01:14:55.963516 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:14:55.990142 kubelet[2759]: I0307 01:14:55.988974 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e38da1e-df53-4611-ac35-2d4dd975c9f5-registration-dir\") pod \"csi-node-driver-njdzt\" (UID: \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\") " pod="calico-system/csi-node-driver-njdzt"
Mar 7 01:14:55.990543 kubelet[2759]: I0307 01:14:55.990482 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e38da1e-df53-4611-ac35-2d4dd975c9f5-socket-dir\") pod \"csi-node-driver-njdzt\" (UID: \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\") " pod="calico-system/csi-node-driver-njdzt"
Mar 7 01:14:55.991393 kubelet[2759]: I0307 01:14:55.991336 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e38da1e-df53-4611-ac35-2d4dd975c9f5-varrun\") pod \"csi-node-driver-njdzt\" (UID: \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\") " pod="calico-system/csi-node-driver-njdzt"
Mar 7 01:14:55.992229 kubelet[2759]: I0307 01:14:55.991757 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e38da1e-df53-4611-ac35-2d4dd975c9f5-kubelet-dir\") pod \"csi-node-driver-njdzt\" (UID: \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\") " pod="calico-system/csi-node-driver-njdzt"
Mar 7 01:14:55.992455 kubelet[2759]: I0307 01:14:55.992428 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjrv\" (UniqueName: \"kubernetes.io/projected/9e38da1e-df53-4611-ac35-2d4dd975c9f5-kube-api-access-mbjrv\") pod \"csi-node-driver-njdzt\" (UID: \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\") " pod="calico-system/csi-node-driver-njdzt"
Mar 7 01:14:55.993448 kubelet[2759]: E0307 01:14:55.993420 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.993587 kubelet[2759]: W0307 01:14:55.993467 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.993587 kubelet[2759]: E0307 01:14:55.993496 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.994535 kubelet[2759]: E0307 01:14:55.994244 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.994535 kubelet[2759]: W0307 01:14:55.994267 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.994535 kubelet[2759]: E0307 01:14:55.994286 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.995735 kubelet[2759]: E0307 01:14:55.994851 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.995735 kubelet[2759]: W0307 01:14:55.994869 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.995735 kubelet[2759]: E0307 01:14:55.994890 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.995735 kubelet[2759]: E0307 01:14:55.995290 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.995735 kubelet[2759]: W0307 01:14:55.995304 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.995735 kubelet[2759]: E0307 01:14:55.995338 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.995735 kubelet[2759]: E0307 01:14:55.995735 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.996165 kubelet[2759]: W0307 01:14:55.995774 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.996165 kubelet[2759]: E0307 01:14:55.995793 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.996284 kubelet[2759]: E0307 01:14:55.996258 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.996344 kubelet[2759]: W0307 01:14:55.996272 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.996344 kubelet[2759]: E0307 01:14:55.996310 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.997880 kubelet[2759]: E0307 01:14:55.996737 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.997880 kubelet[2759]: W0307 01:14:55.996752 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.997880 kubelet[2759]: E0307 01:14:55.996781 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:55.997880 kubelet[2759]: E0307 01:14:55.997292 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:55.997880 kubelet[2759]: W0307 01:14:55.997311 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:55.997880 kubelet[2759]: E0307 01:14:55.997328 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.998038 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.000036 kubelet[2759]: W0307 01:14:55.998056 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.998118 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.998764 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.000036 kubelet[2759]: W0307 01:14:55.998780 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.998797 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.999250 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.000036 kubelet[2759]: W0307 01:14:55.999293 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.999312 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.000036 kubelet[2759]: E0307 01:14:55.999946 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.001879 kubelet[2759]: W0307 01:14:55.999966 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.001879 kubelet[2759]: E0307 01:14:56.000022 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.001879 kubelet[2759]: E0307 01:14:56.000533 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.001879 kubelet[2759]: W0307 01:14:56.000547 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.001879 kubelet[2759]: E0307 01:14:56.000580 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.001879 kubelet[2759]: E0307 01:14:56.001033 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.001879 kubelet[2759]: W0307 01:14:56.001047 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.001879 kubelet[2759]: E0307 01:14:56.001063 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.001879 kubelet[2759]: E0307 01:14:56.001570 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.001879 kubelet[2759]: W0307 01:14:56.001585 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.001610 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.002039 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.004848 kubelet[2759]: W0307 01:14:56.002053 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.002069 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.002545 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.004848 kubelet[2759]: W0307 01:14:56.002562 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.002579 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.003110 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.004848 kubelet[2759]: W0307 01:14:56.003127 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.004848 kubelet[2759]: E0307 01:14:56.003143 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.005327 kubelet[2759]: E0307 01:14:56.003626 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.005327 kubelet[2759]: W0307 01:14:56.003641 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.005327 kubelet[2759]: E0307 01:14:56.003668 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.008838 kubelet[2759]: E0307 01:14:56.006010 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.008838 kubelet[2759]: W0307 01:14:56.006033 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.008838 kubelet[2759]: E0307 01:14:56.006058 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.008838 kubelet[2759]: E0307 01:14:56.006461 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.008838 kubelet[2759]: W0307 01:14:56.006476 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.008838 kubelet[2759]: E0307 01:14:56.006492 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.008838 kubelet[2759]: E0307 01:14:56.008037 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.008838 kubelet[2759]: W0307 01:14:56.008053 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.008838 kubelet[2759]: E0307 01:14:56.008071 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.011394 kubelet[2759]: E0307 01:14:56.009737 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.011394 kubelet[2759]: W0307 01:14:56.009756 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.011394 kubelet[2759]: E0307 01:14:56.009773 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.011627 kubelet[2759]: E0307 01:14:56.011495 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.011627 kubelet[2759]: W0307 01:14:56.011511 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.011627 kubelet[2759]: E0307 01:14:56.011528 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.013486 kubelet[2759]: E0307 01:14:56.012092 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.013486 kubelet[2759]: W0307 01:14:56.012109 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.013486 kubelet[2759]: E0307 01:14:56.012126 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.016399 kubelet[2759]: E0307 01:14:56.014831 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.016399 kubelet[2759]: W0307 01:14:56.014851 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.016399 kubelet[2759]: E0307 01:14:56.014870 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.016399 kubelet[2759]: E0307 01:14:56.015305 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.016399 kubelet[2759]: W0307 01:14:56.015339 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.016399 kubelet[2759]: E0307 01:14:56.015356 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.017004 kubelet[2759]: E0307 01:14:56.016968 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.017004 kubelet[2759]: W0307 01:14:56.016990 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.017140 kubelet[2759]: E0307 01:14:56.017008 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.018175 kubelet[2759]: E0307 01:14:56.018148 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.018175 kubelet[2759]: W0307 01:14:56.018172 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.018383 kubelet[2759]: E0307 01:14:56.018191 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.020351 kubelet[2759]: E0307 01:14:56.019720 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.020351 kubelet[2759]: W0307 01:14:56.019740 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.020351 kubelet[2759]: E0307 01:14:56.020053 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.025162 kubelet[2759]: E0307 01:14:56.025138 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.025162 kubelet[2759]: W0307 01:14:56.025160 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.025327 kubelet[2759]: E0307 01:14:56.025179 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.029094 kubelet[2759]: E0307 01:14:56.028922 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.029094 kubelet[2759]: W0307 01:14:56.028945 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.029094 kubelet[2759]: E0307 01:14:56.028966 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.029744 kubelet[2759]: E0307 01:14:56.029571 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.029744 kubelet[2759]: W0307 01:14:56.029590 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.029744 kubelet[2759]: E0307 01:14:56.029607 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.030392 kubelet[2759]: E0307 01:14:56.030239 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.030392 kubelet[2759]: W0307 01:14:56.030257 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.030392 kubelet[2759]: E0307 01:14:56.030273 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.033459 kubelet[2759]: E0307 01:14:56.031546 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.033459 kubelet[2759]: W0307 01:14:56.031563 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.033459 kubelet[2759]: E0307 01:14:56.031580 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.034256 kubelet[2759]: E0307 01:14:56.034238 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.034430 kubelet[2759]: W0307 01:14:56.034355 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.034430 kubelet[2759]: E0307 01:14:56.034402 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.038529 kubelet[2759]: E0307 01:14:56.037929 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.039846 kubelet[2759]: W0307 01:14:56.039620 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.039846 kubelet[2759]: E0307 01:14:56.039647 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.040770 kubelet[2759]: E0307 01:14:56.040593 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.041618 kubelet[2759]: W0307 01:14:56.041418 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.041618 kubelet[2759]: E0307 01:14:56.041450 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.042484 kubelet[2759]: E0307 01:14:56.042058 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.042484 kubelet[2759]: W0307 01:14:56.042075 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.042484 kubelet[2759]: E0307 01:14:56.042092 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.043552 kubelet[2759]: E0307 01:14:56.043114 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.043552 kubelet[2759]: W0307 01:14:56.043131 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.043552 kubelet[2759]: E0307 01:14:56.043147 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.047566 kubelet[2759]: E0307 01:14:56.047546 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.047882 kubelet[2759]: W0307 01:14:56.047695 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.047882 kubelet[2759]: E0307 01:14:56.047721 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.049325 kubelet[2759]: E0307 01:14:56.049215 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.049325 kubelet[2759]: W0307 01:14:56.049234 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.049325 kubelet[2759]: E0307 01:14:56.049251 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.051040 kubelet[2759]: E0307 01:14:56.050622 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.051040 kubelet[2759]: W0307 01:14:56.050641 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.051040 kubelet[2759]: E0307 01:14:56.050658 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.052212 kubelet[2759]: E0307 01:14:56.052014 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.052212 kubelet[2759]: W0307 01:14:56.052033 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.052212 kubelet[2759]: E0307 01:14:56.052050 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:56.074998 containerd[1594]: time="2026-03-07T01:14:56.074477854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9879c678-2mljm,Uid:d92594c3-928f-4760-b69a-e6640f3cb833,Namespace:calico-system,Attempt:0,}"
Mar 7 01:14:56.087269 kubelet[2759]: E0307 01:14:56.086460 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:56.087269 kubelet[2759]: W0307 01:14:56.086490 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:56.087269 kubelet[2759]: E0307 01:14:56.086523 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 7 01:14:56.099348 kubelet[2759]: E0307 01:14:56.094170 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.099348 kubelet[2759]: W0307 01:14:56.094196 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.099348 kubelet[2759]: E0307 01:14:56.094228 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.100602 kubelet[2759]: E0307 01:14:56.100258 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.100602 kubelet[2759]: W0307 01:14:56.100287 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.100602 kubelet[2759]: E0307 01:14:56.100317 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.102379 kubelet[2759]: E0307 01:14:56.100884 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.102379 kubelet[2759]: W0307 01:14:56.100905 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.102379 kubelet[2759]: E0307 01:14:56.100925 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.102379 kubelet[2759]: E0307 01:14:56.102328 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.102379 kubelet[2759]: W0307 01:14:56.102345 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.102379 kubelet[2759]: E0307 01:14:56.102384 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.106643 kubelet[2759]: E0307 01:14:56.104398 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.106643 kubelet[2759]: W0307 01:14:56.104418 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.106643 kubelet[2759]: E0307 01:14:56.104437 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.106643 kubelet[2759]: E0307 01:14:56.105840 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.106643 kubelet[2759]: W0307 01:14:56.105855 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.106643 kubelet[2759]: E0307 01:14:56.105873 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.106643 kubelet[2759]: E0307 01:14:56.106223 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.106643 kubelet[2759]: W0307 01:14:56.106237 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.106643 kubelet[2759]: E0307 01:14:56.106253 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.107516 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.110274 kubelet[2759]: W0307 01:14:56.107562 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.107582 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.109157 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.110274 kubelet[2759]: W0307 01:14:56.109174 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.109191 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.109719 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.110274 kubelet[2759]: W0307 01:14:56.109733 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.109750 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.110274 kubelet[2759]: E0307 01:14:56.110132 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.112935 kubelet[2759]: W0307 01:14:56.110146 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.112935 kubelet[2759]: E0307 01:14:56.110162 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.112935 kubelet[2759]: E0307 01:14:56.110499 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.112935 kubelet[2759]: W0307 01:14:56.110514 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.112935 kubelet[2759]: E0307 01:14:56.110533 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.112935 kubelet[2759]: E0307 01:14:56.110850 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.112935 kubelet[2759]: W0307 01:14:56.110864 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.112935 kubelet[2759]: E0307 01:14:56.110879 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.112935 kubelet[2759]: E0307 01:14:56.111175 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.112935 kubelet[2759]: W0307 01:14:56.111187 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.111205 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.111503 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.113482 kubelet[2759]: W0307 01:14:56.111516 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.111531 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.111856 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.113482 kubelet[2759]: W0307 01:14:56.111869 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.111884 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.112197 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.113482 kubelet[2759]: W0307 01:14:56.112214 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113482 kubelet[2759]: E0307 01:14:56.112231 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.112623 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.113960 kubelet[2759]: W0307 01:14:56.112638 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.112653 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.112923 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.113960 kubelet[2759]: W0307 01:14:56.112935 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.112949 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.113220 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.113960 kubelet[2759]: W0307 01:14:56.113232 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.113245 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.113960 kubelet[2759]: E0307 01:14:56.113584 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.116411 kubelet[2759]: W0307 01:14:56.113597 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.116411 kubelet[2759]: E0307 01:14:56.113623 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.116411 kubelet[2759]: E0307 01:14:56.115304 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.116411 kubelet[2759]: W0307 01:14:56.115319 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.116411 kubelet[2759]: E0307 01:14:56.115336 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.118292 kubelet[2759]: E0307 01:14:56.116942 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.118292 kubelet[2759]: W0307 01:14:56.116961 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.118292 kubelet[2759]: E0307 01:14:56.116983 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.118292 kubelet[2759]: E0307 01:14:56.117351 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.118292 kubelet[2759]: W0307 01:14:56.117382 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.118292 kubelet[2759]: E0307 01:14:56.117412 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:56.118292 kubelet[2759]: E0307 01:14:56.117841 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.118292 kubelet[2759]: W0307 01:14:56.117872 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.118292 kubelet[2759]: E0307 01:14:56.117892 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.128388 kubelet[2759]: E0307 01:14:56.128329 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:56.128606 kubelet[2759]: W0307 01:14:56.128356 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:56.128606 kubelet[2759]: E0307 01:14:56.128421 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:56.140826 containerd[1594]: time="2026-03-07T01:14:56.140493184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:56.140826 containerd[1594]: time="2026-03-07T01:14:56.140739042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:56.141150 containerd[1594]: time="2026-03-07T01:14:56.140801738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:56.142690 containerd[1594]: time="2026-03-07T01:14:56.142530550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:56.162436 containerd[1594]: time="2026-03-07T01:14:56.162351021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-njjw5,Uid:1d979d73-07e8-4f4e-bdbc-54c90feb2cf0,Namespace:calico-system,Attempt:0,}" Mar 7 01:14:56.210808 containerd[1594]: time="2026-03-07T01:14:56.210703202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:14:56.211579 containerd[1594]: time="2026-03-07T01:14:56.211056720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:14:56.211579 containerd[1594]: time="2026-03-07T01:14:56.211339112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:56.212711 containerd[1594]: time="2026-03-07T01:14:56.212631980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:14:56.247319 containerd[1594]: time="2026-03-07T01:14:56.247165376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9879c678-2mljm,Uid:d92594c3-928f-4760-b69a-e6640f3cb833,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f796c81ae58f7559c27fba04682623dc114e37a92007c51eed60d6f2fa552e2\"" Mar 7 01:14:56.252138 containerd[1594]: time="2026-03-07T01:14:56.252099668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:14:56.280440 containerd[1594]: time="2026-03-07T01:14:56.280346321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-njjw5,Uid:1d979d73-07e8-4f4e-bdbc-54c90feb2cf0,Namespace:calico-system,Attempt:0,} returns sandbox id \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\"" Mar 7 01:14:57.195206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354780173.mount: Deactivated successfully. Mar 7 01:14:57.596348 kubelet[2759]: E0307 01:14:57.596201 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5" Mar 7 01:14:58.145812 containerd[1594]: time="2026-03-07T01:14:58.145731931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:58.147192 containerd[1594]: time="2026-03-07T01:14:58.147118066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:14:58.148687 containerd[1594]: time="2026-03-07T01:14:58.148619116Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:58.151981 containerd[1594]: time="2026-03-07T01:14:58.151943178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:58.153436 containerd[1594]: time="2026-03-07T01:14:58.153055734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.900881732s" Mar 7 01:14:58.153436 containerd[1594]: time="2026-03-07T01:14:58.153105770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:14:58.154878 containerd[1594]: time="2026-03-07T01:14:58.154846363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:14:58.181165 containerd[1594]: time="2026-03-07T01:14:58.181112774Z" level=info msg="CreateContainer within sandbox \"3f796c81ae58f7559c27fba04682623dc114e37a92007c51eed60d6f2fa552e2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:14:58.197568 containerd[1594]: time="2026-03-07T01:14:58.196689395Z" level=info msg="CreateContainer within sandbox \"3f796c81ae58f7559c27fba04682623dc114e37a92007c51eed60d6f2fa552e2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"91b26fd4ac59009c299b902ce1c44b2902d982bde4e952722280ff5d2151db03\"" Mar 7 01:14:58.199595 containerd[1594]: time="2026-03-07T01:14:58.199162547Z" level=info msg="StartContainer for \"91b26fd4ac59009c299b902ce1c44b2902d982bde4e952722280ff5d2151db03\"" Mar 7 01:14:58.313750 
containerd[1594]: time="2026-03-07T01:14:58.313427940Z" level=info msg="StartContainer for \"91b26fd4ac59009c299b902ce1c44b2902d982bde4e952722280ff5d2151db03\" returns successfully" Mar 7 01:14:58.788443 kubelet[2759]: E0307 01:14:58.788026 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:58.790910 kubelet[2759]: W0307 01:14:58.789307 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:58.790910 kubelet[2759]: E0307 01:14:58.789355 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:58.792412 kubelet[2759]: E0307 01:14:58.791333 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:58.792412 kubelet[2759]: W0307 01:14:58.791356 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:58.792412 kubelet[2759]: E0307 01:14:58.791401 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:58.794641 kubelet[2759]: E0307 01:14:58.794190 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:58.794641 kubelet[2759]: W0307 01:14:58.794334 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:58.794641 kubelet[2759]: E0307 01:14:58.794505 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:58.796213 kubelet[2759]: E0307 01:14:58.796061 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:58.796213 kubelet[2759]: W0307 01:14:58.796080 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:58.796213 kubelet[2759]: E0307 01:14:58.796100 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:14:58.797186 kubelet[2759]: E0307 01:14:58.796885 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:58.797186 kubelet[2759]: W0307 01:14:58.796902 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:58.797186 kubelet[2759]: E0307 01:14:58.796920 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:14:58.798532 kubelet[2759]: E0307 01:14:58.798204 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:14:58.798532 kubelet[2759]: W0307 01:14:58.798222 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:14:58.798532 kubelet[2759]: E0307 01:14:58.798240 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 01:14:58.799680 kubelet[2759]: E0307 01:14:58.799508 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.799680 kubelet[2759]: W0307 01:14:58.799527 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.799680 kubelet[2759]: E0307 01:14:58.799545 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.800947 kubelet[2759]: E0307 01:14:58.800135 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.800947 kubelet[2759]: W0307 01:14:58.800260 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.800947 kubelet[2759]: E0307 01:14:58.800281 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.802050 kubelet[2759]: E0307 01:14:58.801700 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.802050 kubelet[2759]: W0307 01:14:58.801718 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.802050 kubelet[2759]: E0307 01:14:58.801734 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.803164 kubelet[2759]: E0307 01:14:58.802947 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.803164 kubelet[2759]: W0307 01:14:58.802972 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.803164 kubelet[2759]: E0307 01:14:58.802992 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.806565 kubelet[2759]: E0307 01:14:58.806172 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.806565 kubelet[2759]: W0307 01:14:58.806194 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.806565 kubelet[2759]: E0307 01:14:58.806211 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.810094 kubelet[2759]: E0307 01:14:58.807868 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.810094 kubelet[2759]: W0307 01:14:58.807889 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.810094 kubelet[2759]: E0307 01:14:58.807907 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.810751 kubelet[2759]: I0307 01:14:58.808316 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f9879c678-2mljm" podStartSLOduration=1.904649637 podStartE2EDuration="3.808300079s" podCreationTimestamp="2026-03-07 01:14:55 +0000 UTC" firstStartedPulling="2026-03-07 01:14:56.250984441 +0000 UTC m=+19.843356872" lastFinishedPulling="2026-03-07 01:14:58.154634869 +0000 UTC m=+21.747007314" observedRunningTime="2026-03-07 01:14:58.806877333 +0000 UTC m=+22.399249788" watchObservedRunningTime="2026-03-07 01:14:58.808300079 +0000 UTC m=+22.400672535"
Mar 7 01:14:58.811730 kubelet[2759]: E0307 01:14:58.809762 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.812069 kubelet[2759]: W0307 01:14:58.812029 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.812277 kubelet[2759]: E0307 01:14:58.812257 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.813250 kubelet[2759]: E0307 01:14:58.813224 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.813250 kubelet[2759]: W0307 01:14:58.813247 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.813473 kubelet[2759]: E0307 01:14:58.813269 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.813651 kubelet[2759]: E0307 01:14:58.813629 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.813651 kubelet[2759]: W0307 01:14:58.813649 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.813783 kubelet[2759]: E0307 01:14:58.813667 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.840573 kubelet[2759]: E0307 01:14:58.840530 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.840862 kubelet[2759]: W0307 01:14:58.840749 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.840862 kubelet[2759]: E0307 01:14:58.840783 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.841291 kubelet[2759]: E0307 01:14:58.841258 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.841291 kubelet[2759]: W0307 01:14:58.841281 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.841549 kubelet[2759]: E0307 01:14:58.841312 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.841863 kubelet[2759]: E0307 01:14:58.841751 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.841863 kubelet[2759]: W0307 01:14:58.841771 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.841863 kubelet[2759]: E0307 01:14:58.841789 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.842407 kubelet[2759]: E0307 01:14:58.842296 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.842407 kubelet[2759]: W0307 01:14:58.842318 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.842407 kubelet[2759]: E0307 01:14:58.842339 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.842834 kubelet[2759]: E0307 01:14:58.842732 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.842834 kubelet[2759]: W0307 01:14:58.842747 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.842834 kubelet[2759]: E0307 01:14:58.842766 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.843173 kubelet[2759]: E0307 01:14:58.843105 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.843173 kubelet[2759]: W0307 01:14:58.843120 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.843173 kubelet[2759]: E0307 01:14:58.843138 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.843561 kubelet[2759]: E0307 01:14:58.843520 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.843561 kubelet[2759]: W0307 01:14:58.843540 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.843561 kubelet[2759]: E0307 01:14:58.843561 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.844051 kubelet[2759]: E0307 01:14:58.844030 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.844051 kubelet[2759]: W0307 01:14:58.844049 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.844181 kubelet[2759]: E0307 01:14:58.844066 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.844489 kubelet[2759]: E0307 01:14:58.844464 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.844489 kubelet[2759]: W0307 01:14:58.844487 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.844637 kubelet[2759]: E0307 01:14:58.844505 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.844938 kubelet[2759]: E0307 01:14:58.844906 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.844938 kubelet[2759]: W0307 01:14:58.844923 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.845073 kubelet[2759]: E0307 01:14:58.844942 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.845315 kubelet[2759]: E0307 01:14:58.845280 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.845315 kubelet[2759]: W0307 01:14:58.845299 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.845315 kubelet[2759]: E0307 01:14:58.845315 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.845743 kubelet[2759]: E0307 01:14:58.845699 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.845743 kubelet[2759]: W0307 01:14:58.845712 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.845743 kubelet[2759]: E0307 01:14:58.845730 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.846407 kubelet[2759]: E0307 01:14:58.846304 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.846407 kubelet[2759]: W0307 01:14:58.846324 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.846407 kubelet[2759]: E0307 01:14:58.846341 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.847050 kubelet[2759]: E0307 01:14:58.847018 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.847050 kubelet[2759]: W0307 01:14:58.847036 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.847050 kubelet[2759]: E0307 01:14:58.847052 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.847559 kubelet[2759]: E0307 01:14:58.847537 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.847559 kubelet[2759]: W0307 01:14:58.847556 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.847771 kubelet[2759]: E0307 01:14:58.847574 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.847979 kubelet[2759]: E0307 01:14:58.847959 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.847979 kubelet[2759]: W0307 01:14:58.847978 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.848190 kubelet[2759]: E0307 01:14:58.847995 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.848569 kubelet[2759]: E0307 01:14:58.848480 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.848569 kubelet[2759]: W0307 01:14:58.848499 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.848569 kubelet[2759]: E0307 01:14:58.848515 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:58.849203 kubelet[2759]: E0307 01:14:58.849182 2759 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:14:58.849203 kubelet[2759]: W0307 01:14:58.849201 2759 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:14:58.849335 kubelet[2759]: E0307 01:14:58.849218 2759 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:14:59.071193 containerd[1594]: time="2026-03-07T01:14:59.070129232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:59.073996 containerd[1594]: time="2026-03-07T01:14:59.073931955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 7 01:14:59.076595 containerd[1594]: time="2026-03-07T01:14:59.075430097Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:59.079816 containerd[1594]: time="2026-03-07T01:14:59.079737658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:59.081388 containerd[1594]: time="2026-03-07T01:14:59.081316645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 926.283003ms"
Mar 7 01:14:59.081492 containerd[1594]: time="2026-03-07T01:14:59.081397996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 7 01:14:59.086880 containerd[1594]: time="2026-03-07T01:14:59.086827553Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 7 01:14:59.105036 containerd[1594]: time="2026-03-07T01:14:59.104986963Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"24edd4a5a0ee4d97d8c0d76c2b97abbb139c0d452844979d8c693157b15e10f0\""
Mar 7 01:14:59.107395 containerd[1594]: time="2026-03-07T01:14:59.105882561Z" level=info msg="StartContainer for \"24edd4a5a0ee4d97d8c0d76c2b97abbb139c0d452844979d8c693157b15e10f0\""
Mar 7 01:14:59.198135 containerd[1594]: time="2026-03-07T01:14:59.198081447Z" level=info msg="StartContainer for \"24edd4a5a0ee4d97d8c0d76c2b97abbb139c0d452844979d8c693157b15e10f0\" returns successfully"
Mar 7 01:14:59.252889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24edd4a5a0ee4d97d8c0d76c2b97abbb139c0d452844979d8c693157b15e10f0-rootfs.mount: Deactivated successfully.
Mar 7 01:14:59.595686 kubelet[2759]: E0307 01:14:59.595597 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:14:59.788873 kubelet[2759]: I0307 01:14:59.788397 2759 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:15:00.180027 containerd[1594]: time="2026-03-07T01:15:00.179932825Z" level=info msg="shim disconnected" id=24edd4a5a0ee4d97d8c0d76c2b97abbb139c0d452844979d8c693157b15e10f0 namespace=k8s.io
Mar 7 01:15:00.180027 containerd[1594]: time="2026-03-07T01:15:00.180025556Z" level=warning msg="cleaning up after shim disconnected" id=24edd4a5a0ee4d97d8c0d76c2b97abbb139c0d452844979d8c693157b15e10f0 namespace=k8s.io
Mar 7 01:15:00.180027 containerd[1594]: time="2026-03-07T01:15:00.180040319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:15:00.793789 containerd[1594]: time="2026-03-07T01:15:00.793708172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 01:15:01.596155 kubelet[2759]: E0307 01:15:01.596081 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:03.596548 kubelet[2759]: E0307 01:15:03.596467 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:05.596188 kubelet[2759]: E0307 01:15:05.596127 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:07.591399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664856180.mount: Deactivated successfully.
Mar 7 01:15:07.596328 kubelet[2759]: E0307 01:15:07.596163 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:07.632495 containerd[1594]: time="2026-03-07T01:15:07.632428183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:07.634544 containerd[1594]: time="2026-03-07T01:15:07.634294494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:15:07.636066 containerd[1594]: time="2026-03-07T01:15:07.635981573Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:07.641105 containerd[1594]: time="2026-03-07T01:15:07.641015502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:07.642490 containerd[1594]: time="2026-03-07T01:15:07.642167114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.848402774s"
Mar 7 01:15:07.642490 containerd[1594]: time="2026-03-07T01:15:07.642216161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:15:07.649019 containerd[1594]: time="2026-03-07T01:15:07.648964395Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:15:07.678898 containerd[1594]: time="2026-03-07T01:15:07.678839819Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"249b366394dddc7715fdecafd087636ea111fdccb459abd452147ea3aed46cfc\""
Mar 7 01:15:07.680231 containerd[1594]: time="2026-03-07T01:15:07.680085015Z" level=info msg="StartContainer for \"249b366394dddc7715fdecafd087636ea111fdccb459abd452147ea3aed46cfc\""
Mar 7 01:15:07.782474 containerd[1594]: time="2026-03-07T01:15:07.782356637Z" level=info msg="StartContainer for \"249b366394dddc7715fdecafd087636ea111fdccb459abd452147ea3aed46cfc\" returns successfully"
Mar 7 01:15:08.591538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-249b366394dddc7715fdecafd087636ea111fdccb459abd452147ea3aed46cfc-rootfs.mount: Deactivated successfully.
Mar 7 01:15:09.471306 containerd[1594]: time="2026-03-07T01:15:09.471191318Z" level=info msg="shim disconnected" id=249b366394dddc7715fdecafd087636ea111fdccb459abd452147ea3aed46cfc namespace=k8s.io
Mar 7 01:15:09.471306 containerd[1594]: time="2026-03-07T01:15:09.471280308Z" level=warning msg="cleaning up after shim disconnected" id=249b366394dddc7715fdecafd087636ea111fdccb459abd452147ea3aed46cfc namespace=k8s.io
Mar 7 01:15:09.471306 containerd[1594]: time="2026-03-07T01:15:09.471296583Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:15:09.595909 kubelet[2759]: E0307 01:15:09.595825 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:09.832080 containerd[1594]: time="2026-03-07T01:15:09.831603996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:15:10.489029 kubelet[2759]: I0307 01:15:10.488803 2759 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:15:11.595690 kubelet[2759]: E0307 01:15:11.595623 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:13.173971 containerd[1594]: time="2026-03-07T01:15:13.173891477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:13.175395 containerd[1594]: time="2026-03-07T01:15:13.175302876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:15:13.176523 containerd[1594]: time="2026-03-07T01:15:13.176467953Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:13.179706 containerd[1594]: time="2026-03-07T01:15:13.179640566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:13.181884 containerd[1594]: time="2026-03-07T01:15:13.181023912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.34936565s"
Mar 7 01:15:13.181884 containerd[1594]: time="2026-03-07T01:15:13.181068215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:15:13.186631 containerd[1594]: time="2026-03-07T01:15:13.186447146Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:15:13.206411 containerd[1594]: time="2026-03-07T01:15:13.205676414Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9fd680a688a6bd1c59b2106d81459b1be27327d86d8ab9cdeb93f9a5490eabc8\""
Mar 7 01:15:13.207951 containerd[1594]: time="2026-03-07T01:15:13.207627934Z" level=info msg="StartContainer for \"9fd680a688a6bd1c59b2106d81459b1be27327d86d8ab9cdeb93f9a5490eabc8\""
Mar 7 01:15:13.296452 containerd[1594]: time="2026-03-07T01:15:13.296284179Z" level=info msg="StartContainer for \"9fd680a688a6bd1c59b2106d81459b1be27327d86d8ab9cdeb93f9a5490eabc8\" returns successfully"
Mar 7 01:15:13.596684 kubelet[2759]: E0307 01:15:13.596425 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5"
Mar 7 01:15:14.304505 containerd[1594]: time="2026-03-07T01:15:14.304184864Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:15:14.342897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd680a688a6bd1c59b2106d81459b1be27327d86d8ab9cdeb93f9a5490eabc8-rootfs.mount: Deactivated successfully.
Mar 7 01:15:14.364611 kubelet[2759]: I0307 01:15:14.363521 2759 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 7 01:15:14.664171 kubelet[2759]: I0307 01:15:14.663986 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25502944-3541-4549-a9a1-bb47aa3bb3f0-config-volume\") pod \"coredns-674b8bbfcf-6mtnf\" (UID: \"25502944-3541-4549-a9a1-bb47aa3bb3f0\") " pod="kube-system/coredns-674b8bbfcf-6mtnf"
Mar 7 01:15:14.664171 kubelet[2759]: I0307 01:15:14.664045 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mkwk\" (UniqueName: \"kubernetes.io/projected/25502944-3541-4549-a9a1-bb47aa3bb3f0-kube-api-access-6mkwk\") pod \"coredns-674b8bbfcf-6mtnf\" (UID: \"25502944-3541-4549-a9a1-bb47aa3bb3f0\") " pod="kube-system/coredns-674b8bbfcf-6mtnf"
Mar 7 01:15:14.864876 kubelet[2759]: I0307 01:15:14.864803 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b92beb4-baab-4ede-a5b0-75188956c922-config-volume\") pod \"coredns-674b8bbfcf-v6pgz\" (UID: \"9b92beb4-baab-4ede-a5b0-75188956c922\") " pod="kube-system/coredns-674b8bbfcf-v6pgz"
Mar 7 01:15:14.864876 kubelet[2759]: I0307 01:15:14.864874 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7glg\" (UniqueName: \"kubernetes.io/projected/9b92beb4-baab-4ede-a5b0-75188956c922-kube-api-access-s7glg\") pod \"coredns-674b8bbfcf-v6pgz\" (UID: \"9b92beb4-baab-4ede-a5b0-75188956c922\") " pod="kube-system/coredns-674b8bbfcf-v6pgz"
Mar 7 01:15:14.901570 containerd[1594]: time="2026-03-07T01:15:14.901328254Z" level=info msg="shim disconnected" id=9fd680a688a6bd1c59b2106d81459b1be27327d86d8ab9cdeb93f9a5490eabc8 namespace=k8s.io
Mar 7 01:15:14.903870 containerd[1594]: time="2026-03-07T01:15:14.901531362Z" level=warning msg="cleaning up after shim disconnected" id=9fd680a688a6bd1c59b2106d81459b1be27327d86d8ab9cdeb93f9a5490eabc8 namespace=k8s.io
Mar 7 01:15:14.903870 containerd[1594]: time="2026-03-07T01:15:14.901679342Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:15:14.942428 containerd[1594]: time="2026-03-07T01:15:14.942135667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mtnf,Uid:25502944-3541-4549-a9a1-bb47aa3bb3f0,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:14.966189 kubelet[2759]: I0307 01:15:14.966067 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a-calico-apiserver-certs\") pod \"calico-apiserver-dff69f69f-wmcc9\" (UID: \"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a\") " pod="calico-system/calico-apiserver-dff69f69f-wmcc9"
Mar 7 01:15:14.966834 kubelet[2759]: I0307 01:15:14.966788 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ffh\" (UniqueName: \"kubernetes.io/projected/67ade84f-aa00-4fe6-9543-b7e8369907a3-kube-api-access-n7ffh\") pod \"calico-kube-controllers-74586d586-nnrsd\" (UID: \"67ade84f-aa00-4fe6-9543-b7e8369907a3\") " pod="calico-system/calico-kube-controllers-74586d586-nnrsd"
Mar 7 01:15:14.967221 kubelet[2759]: I0307 01:15:14.967088 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0772850a-c739-466a-aa06-6d8eb68ff187-goldmane-key-pair\") pod \"goldmane-5b85766d88-2c4qk\" (UID: \"0772850a-c739-466a-aa06-6d8eb68ff187\") " pod="calico-system/goldmane-5b85766d88-2c4qk"
Mar 7 01:15:14.967221 kubelet[2759]: I0307 01:15:14.967161 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-nginx-config\") pod \"whisker-6f8dfcdf94-vxblc\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " pod="calico-system/whisker-6f8dfcdf94-vxblc"
Mar 7 01:15:14.968320 kubelet[2759]: I0307 01:15:14.967191 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-backend-key-pair\") pod \"whisker-6f8dfcdf94-vxblc\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " pod="calico-system/whisker-6f8dfcdf94-vxblc"
Mar 7 01:15:14.969090 kubelet[2759]: I0307 01:15:14.969063 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2959f216-b13f-411b-bdca-1a485a69cf02-calico-apiserver-certs\") pod \"calico-apiserver-dff69f69f-t24f6\" (UID: \"2959f216-b13f-411b-bdca-1a485a69cf02\") " pod="calico-system/calico-apiserver-dff69f69f-t24f6"
Mar 7 01:15:14.970272 kubelet[2759]: I0307 01:15:14.970244 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddr8g\" (UniqueName: \"kubernetes.io/projected/2959f216-b13f-411b-bdca-1a485a69cf02-kube-api-access-ddr8g\") pod \"calico-apiserver-dff69f69f-t24f6\" (UID: \"2959f216-b13f-411b-bdca-1a485a69cf02\") " pod="calico-system/calico-apiserver-dff69f69f-t24f6"
Mar 7 01:15:14.972378 kubelet[2759]: I0307 01:15:14.970479 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67ade84f-aa00-4fe6-9543-b7e8369907a3-tigera-ca-bundle\") pod \"calico-kube-controllers-74586d586-nnrsd\" (UID: \"67ade84f-aa00-4fe6-9543-b7e8369907a3\") " pod="calico-system/calico-kube-controllers-74586d586-nnrsd"
Mar 7 01:15:14.972378 kubelet[2759]: I0307 01:15:14.970526 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjbk\" (UniqueName: \"kubernetes.io/projected/0772850a-c739-466a-aa06-6d8eb68ff187-kube-api-access-5pjbk\") pod \"goldmane-5b85766d88-2c4qk\" (UID: \"0772850a-c739-466a-aa06-6d8eb68ff187\") " pod="calico-system/goldmane-5b85766d88-2c4qk"
Mar 7 01:15:14.972378 kubelet[2759]: I0307 01:15:14.970578 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgtb\" (UniqueName: \"kubernetes.io/projected/fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a-kube-api-access-nbgtb\") pod \"calico-apiserver-dff69f69f-wmcc9\" (UID: \"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a\") " pod="calico-system/calico-apiserver-dff69f69f-wmcc9"
Mar 7 01:15:14.972378 kubelet[2759]: I0307 01:15:14.970608 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-ca-bundle\") pod \"whisker-6f8dfcdf94-vxblc\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " pod="calico-system/whisker-6f8dfcdf94-vxblc"
Mar 7 01:15:14.972378 kubelet[2759]: I0307 01:15:14.970640 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97nn2\" (UniqueName: \"kubernetes.io/projected/a4b670b8-f0de-4b5d-bfd0-52982198657f-kube-api-access-97nn2\") pod \"whisker-6f8dfcdf94-vxblc\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " pod="calico-system/whisker-6f8dfcdf94-vxblc"
Mar 7 01:15:14.973170 kubelet[2759]: I0307 01:15:14.970690 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0772850a-c739-466a-aa06-6d8eb68ff187-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-2c4qk\" (UID: \"0772850a-c739-466a-aa06-6d8eb68ff187\") "
pod="calico-system/goldmane-5b85766d88-2c4qk" Mar 7 01:15:14.973170 kubelet[2759]: I0307 01:15:14.970722 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0772850a-c739-466a-aa06-6d8eb68ff187-config\") pod \"goldmane-5b85766d88-2c4qk\" (UID: \"0772850a-c739-466a-aa06-6d8eb68ff187\") " pod="calico-system/goldmane-5b85766d88-2c4qk" Mar 7 01:15:15.092992 containerd[1594]: time="2026-03-07T01:15:15.092042897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v6pgz,Uid:9b92beb4-baab-4ede-a5b0-75188956c922,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:15.145073 containerd[1594]: time="2026-03-07T01:15:15.144493617Z" level=error msg="Failed to destroy network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.145073 containerd[1594]: time="2026-03-07T01:15:15.144916413Z" level=error msg="encountered an error cleaning up failed sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.145073 containerd[1594]: time="2026-03-07T01:15:15.145001855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mtnf,Uid:25502944-3541-4549-a9a1-bb47aa3bb3f0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 7 01:15:15.146765 kubelet[2759]: E0307 01:15:15.146700 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.146899 kubelet[2759]: E0307 01:15:15.146781 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6mtnf" Mar 7 01:15:15.146899 kubelet[2759]: E0307 01:15:15.146812 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6mtnf" Mar 7 01:15:15.147015 kubelet[2759]: E0307 01:15:15.146888 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6mtnf_kube-system(25502944-3541-4549-a9a1-bb47aa3bb3f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6mtnf_kube-system(25502944-3541-4549-a9a1-bb47aa3bb3f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6mtnf" podUID="25502944-3541-4549-a9a1-bb47aa3bb3f0" Mar 7 01:15:15.209349 containerd[1594]: time="2026-03-07T01:15:15.208734070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-wmcc9,Uid:fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:15.219310 containerd[1594]: time="2026-03-07T01:15:15.219247428Z" level=error msg="Failed to destroy network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.219784 containerd[1594]: time="2026-03-07T01:15:15.219726094Z" level=error msg="encountered an error cleaning up failed sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.219933 containerd[1594]: time="2026-03-07T01:15:15.219797153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v6pgz,Uid:9b92beb4-baab-4ede-a5b0-75188956c922,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.220142 kubelet[2759]: E0307 01:15:15.220070 2759 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.220279 kubelet[2759]: E0307 01:15:15.220159 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v6pgz" Mar 7 01:15:15.220279 kubelet[2759]: E0307 01:15:15.220236 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v6pgz" Mar 7 01:15:15.220426 kubelet[2759]: E0307 01:15:15.220312 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v6pgz_kube-system(9b92beb4-baab-4ede-a5b0-75188956c922)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v6pgz_kube-system(9b92beb4-baab-4ede-a5b0-75188956c922)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v6pgz" podUID="9b92beb4-baab-4ede-a5b0-75188956c922" Mar 7 01:15:15.230940 containerd[1594]: time="2026-03-07T01:15:15.230616533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-t24f6,Uid:2959f216-b13f-411b-bdca-1a485a69cf02,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:15.240827 containerd[1594]: time="2026-03-07T01:15:15.240508370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-2c4qk,Uid:0772850a-c739-466a-aa06-6d8eb68ff187,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:15.253325 containerd[1594]: time="2026-03-07T01:15:15.253267159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8dfcdf94-vxblc,Uid:a4b670b8-f0de-4b5d-bfd0-52982198657f,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:15.257456 containerd[1594]: time="2026-03-07T01:15:15.257401067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74586d586-nnrsd,Uid:67ade84f-aa00-4fe6-9543-b7e8369907a3,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:15.406811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07-shm.mount: Deactivated successfully. 
Mar 7 01:15:15.466222 containerd[1594]: time="2026-03-07T01:15:15.464652957Z" level=error msg="Failed to destroy network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.466222 containerd[1594]: time="2026-03-07T01:15:15.465183610Z" level=error msg="encountered an error cleaning up failed sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.466222 containerd[1594]: time="2026-03-07T01:15:15.465250151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-wmcc9,Uid:fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.472152 kubelet[2759]: E0307 01:15:15.469552 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.472152 kubelet[2759]: E0307 01:15:15.469658 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff69f69f-wmcc9" Mar 7 01:15:15.472152 kubelet[2759]: E0307 01:15:15.469718 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff69f69f-wmcc9" Mar 7 01:15:15.472749 kubelet[2759]: E0307 01:15:15.469847 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dff69f69f-wmcc9_calico-system(fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dff69f69f-wmcc9_calico-system(fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff69f69f-wmcc9" podUID="fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a" Mar 7 01:15:15.475795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a-shm.mount: Deactivated successfully. 
Mar 7 01:15:15.522563 containerd[1594]: time="2026-03-07T01:15:15.522500312Z" level=error msg="Failed to destroy network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.523313 containerd[1594]: time="2026-03-07T01:15:15.523172965Z" level=error msg="encountered an error cleaning up failed sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.523313 containerd[1594]: time="2026-03-07T01:15:15.523247509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-t24f6,Uid:2959f216-b13f-411b-bdca-1a485a69cf02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.525269 kubelet[2759]: E0307 01:15:15.523657 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.525269 kubelet[2759]: E0307 01:15:15.523732 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff69f69f-t24f6" Mar 7 01:15:15.525269 kubelet[2759]: E0307 01:15:15.523766 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-dff69f69f-t24f6" Mar 7 01:15:15.525516 kubelet[2759]: E0307 01:15:15.523842 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dff69f69f-t24f6_calico-system(2959f216-b13f-411b-bdca-1a485a69cf02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dff69f69f-t24f6_calico-system(2959f216-b13f-411b-bdca-1a485a69cf02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff69f69f-t24f6" podUID="2959f216-b13f-411b-bdca-1a485a69cf02" Mar 7 01:15:15.536465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3-shm.mount: Deactivated successfully. 
Mar 7 01:15:15.578174 containerd[1594]: time="2026-03-07T01:15:15.578111558Z" level=error msg="Failed to destroy network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.578900 containerd[1594]: time="2026-03-07T01:15:15.578850511Z" level=error msg="encountered an error cleaning up failed sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.579252 containerd[1594]: time="2026-03-07T01:15:15.579121466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-2c4qk,Uid:0772850a-c739-466a-aa06-6d8eb68ff187,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.579812 kubelet[2759]: E0307 01:15:15.579766 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.581271 kubelet[2759]: E0307 01:15:15.579973 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-2c4qk" Mar 7 01:15:15.581271 kubelet[2759]: E0307 01:15:15.580013 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-2c4qk" Mar 7 01:15:15.581271 kubelet[2759]: E0307 01:15:15.580093 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-2c4qk_calico-system(0772850a-c739-466a-aa06-6d8eb68ff187)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-2c4qk_calico-system(0772850a-c739-466a-aa06-6d8eb68ff187)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-2c4qk" podUID="0772850a-c739-466a-aa06-6d8eb68ff187" Mar 7 01:15:15.589978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08-shm.mount: Deactivated successfully. 
Mar 7 01:15:15.593639 containerd[1594]: time="2026-03-07T01:15:15.593571842Z" level=error msg="Failed to destroy network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.594077 containerd[1594]: time="2026-03-07T01:15:15.594031205Z" level=error msg="encountered an error cleaning up failed sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.594195 containerd[1594]: time="2026-03-07T01:15:15.594112591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f8dfcdf94-vxblc,Uid:a4b670b8-f0de-4b5d-bfd0-52982198657f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.596339 kubelet[2759]: E0307 01:15:15.594945 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.596339 kubelet[2759]: E0307 01:15:15.595025 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f8dfcdf94-vxblc" Mar 7 01:15:15.596339 kubelet[2759]: E0307 01:15:15.595060 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f8dfcdf94-vxblc" Mar 7 01:15:15.596629 kubelet[2759]: E0307 01:15:15.595129 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f8dfcdf94-vxblc_calico-system(a4b670b8-f0de-4b5d-bfd0-52982198657f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f8dfcdf94-vxblc_calico-system(a4b670b8-f0de-4b5d-bfd0-52982198657f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f8dfcdf94-vxblc" podUID="a4b670b8-f0de-4b5d-bfd0-52982198657f" Mar 7 01:15:15.603176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098-shm.mount: Deactivated successfully. 
Mar 7 01:15:15.609935 containerd[1594]: time="2026-03-07T01:15:15.609884047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njdzt,Uid:9e38da1e-df53-4611-ac35-2d4dd975c9f5,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:15.610908 containerd[1594]: time="2026-03-07T01:15:15.610858454Z" level=error msg="Failed to destroy network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.611544 containerd[1594]: time="2026-03-07T01:15:15.611479957Z" level=error msg="encountered an error cleaning up failed sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.612340 containerd[1594]: time="2026-03-07T01:15:15.611557678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74586d586-nnrsd,Uid:67ade84f-aa00-4fe6-9543-b7e8369907a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.612499 kubelet[2759]: E0307 01:15:15.611853 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.612499 kubelet[2759]: E0307 01:15:15.611915 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74586d586-nnrsd" Mar 7 01:15:15.612499 kubelet[2759]: E0307 01:15:15.611951 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74586d586-nnrsd" Mar 7 01:15:15.612747 kubelet[2759]: E0307 01:15:15.612027 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74586d586-nnrsd_calico-system(67ade84f-aa00-4fe6-9543-b7e8369907a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74586d586-nnrsd_calico-system(67ade84f-aa00-4fe6-9543-b7e8369907a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74586d586-nnrsd" podUID="67ade84f-aa00-4fe6-9543-b7e8369907a3" Mar 7 01:15:15.696267 containerd[1594]: 
time="2026-03-07T01:15:15.696193787Z" level=error msg="Failed to destroy network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.696779 containerd[1594]: time="2026-03-07T01:15:15.696729868Z" level=error msg="encountered an error cleaning up failed sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.696982 containerd[1594]: time="2026-03-07T01:15:15.696807108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njdzt,Uid:9e38da1e-df53-4611-ac35-2d4dd975c9f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.697210 kubelet[2759]: E0307 01:15:15.697142 2759 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:15.697812 kubelet[2759]: E0307 01:15:15.697235 2759 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njdzt" Mar 7 01:15:15.697812 kubelet[2759]: E0307 01:15:15.697267 2759 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njdzt" Mar 7 01:15:15.697812 kubelet[2759]: E0307 01:15:15.697408 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njdzt_calico-system(9e38da1e-df53-4611-ac35-2d4dd975c9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-njdzt_calico-system(9e38da1e-df53-4611-ac35-2d4dd975c9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5" Mar 7 01:15:15.852528 kubelet[2759]: I0307 01:15:15.852209 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:15.860185 containerd[1594]: time="2026-03-07T01:15:15.860111899Z" level=info msg="StopPodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\"" Mar 7 01:15:15.860944 
containerd[1594]: time="2026-03-07T01:15:15.860704299Z" level=info msg="Ensure that sandbox 9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098 in task-service has been cleanup successfully" Mar 7 01:15:15.864096 kubelet[2759]: I0307 01:15:15.864073 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:15.868215 containerd[1594]: time="2026-03-07T01:15:15.868152405Z" level=info msg="StopPodSandbox for \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\"" Mar 7 01:15:15.871455 containerd[1594]: time="2026-03-07T01:15:15.869385429Z" level=info msg="Ensure that sandbox e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea in task-service has been cleanup successfully" Mar 7 01:15:15.877047 kubelet[2759]: I0307 01:15:15.876839 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:15.878750 containerd[1594]: time="2026-03-07T01:15:15.878693063Z" level=info msg="StopPodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\"" Mar 7 01:15:15.879071 containerd[1594]: time="2026-03-07T01:15:15.878960857Z" level=info msg="Ensure that sandbox 537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07 in task-service has been cleanup successfully" Mar 7 01:15:15.902031 kubelet[2759]: I0307 01:15:15.901994 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:15.905888 containerd[1594]: time="2026-03-07T01:15:15.905751513Z" level=info msg="StopPodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\"" Mar 7 01:15:15.911628 containerd[1594]: time="2026-03-07T01:15:15.911582314Z" level=info msg="Ensure that sandbox 
75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189 in task-service has been cleanup successfully" Mar 7 01:15:15.929518 kubelet[2759]: I0307 01:15:15.929397 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:15.936147 containerd[1594]: time="2026-03-07T01:15:15.935694117Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:15:15.936147 containerd[1594]: time="2026-03-07T01:15:15.935800653Z" level=info msg="StopPodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\"" Mar 7 01:15:15.937450 containerd[1594]: time="2026-03-07T01:15:15.936234124Z" level=info msg="Ensure that sandbox 1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7 in task-service has been cleanup successfully" Mar 7 01:15:15.939034 kubelet[2759]: I0307 01:15:15.939003 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:15.942114 kubelet[2759]: I0307 01:15:15.942090 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:15.944766 containerd[1594]: time="2026-03-07T01:15:15.944731319Z" level=info msg="StopPodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\"" Mar 7 01:15:15.949520 containerd[1594]: time="2026-03-07T01:15:15.949485341Z" level=info msg="Ensure that sandbox d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a in task-service has been cleanup successfully" Mar 7 01:15:15.950181 containerd[1594]: time="2026-03-07T01:15:15.945104305Z" level=info msg="StopPodSandbox for 
\"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\"" Mar 7 01:15:15.951743 containerd[1594]: time="2026-03-07T01:15:15.951464763Z" level=info msg="Ensure that sandbox 5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3 in task-service has been cleanup successfully" Mar 7 01:15:15.970582 kubelet[2759]: I0307 01:15:15.969500 2759 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:15.973920 containerd[1594]: time="2026-03-07T01:15:15.973853774Z" level=info msg="StopPodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\"" Mar 7 01:15:15.979923 containerd[1594]: time="2026-03-07T01:15:15.979879139Z" level=info msg="Ensure that sandbox 52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08 in task-service has been cleanup successfully" Mar 7 01:15:16.027407 containerd[1594]: time="2026-03-07T01:15:16.027309349Z" level=info msg="CreateContainer within sandbox \"544bf11c1d6c30c8c6a52ce1db4d9fa311f4feed68dda9e9aab0127852ea66b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b16b7437c81e421311172c04906f4f222689ed78c38e368db58ddc4c21821dc\"" Mar 7 01:15:16.028843 containerd[1594]: time="2026-03-07T01:15:16.028713061Z" level=info msg="StartContainer for \"0b16b7437c81e421311172c04906f4f222689ed78c38e368db58ddc4c21821dc\"" Mar 7 01:15:16.049661 containerd[1594]: time="2026-03-07T01:15:16.049594533Z" level=error msg="StopPodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" failed" error="failed to destroy network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.050628 kubelet[2759]: E0307 01:15:16.050310 2759 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:16.050628 kubelet[2759]: E0307 01:15:16.050435 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098"} Mar 7 01:15:16.050628 kubelet[2759]: E0307 01:15:16.050513 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4b670b8-f0de-4b5d-bfd0-52982198657f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.050628 kubelet[2759]: E0307 01:15:16.050558 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4b670b8-f0de-4b5d-bfd0-52982198657f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f8dfcdf94-vxblc" podUID="a4b670b8-f0de-4b5d-bfd0-52982198657f" Mar 7 01:15:16.082733 containerd[1594]: time="2026-03-07T01:15:16.082668769Z" level=error msg="StopPodSandbox for 
\"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" failed" error="failed to destroy network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.083248 kubelet[2759]: E0307 01:15:16.083204 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:16.083989 kubelet[2759]: E0307 01:15:16.083805 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea"} Mar 7 01:15:16.083989 kubelet[2759]: E0307 01:15:16.083875 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b92beb4-baab-4ede-a5b0-75188956c922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.083989 kubelet[2759]: E0307 01:15:16.083913 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b92beb4-baab-4ede-a5b0-75188956c922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v6pgz" podUID="9b92beb4-baab-4ede-a5b0-75188956c922" Mar 7 01:15:16.111882 containerd[1594]: time="2026-03-07T01:15:16.111737450Z" level=error msg="StopPodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" failed" error="failed to destroy network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.112669 kubelet[2759]: E0307 01:15:16.112411 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:16.112669 kubelet[2759]: E0307 01:15:16.112493 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07"} Mar 7 01:15:16.112669 kubelet[2759]: E0307 01:15:16.112567 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25502944-3541-4549-a9a1-bb47aa3bb3f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.112669 kubelet[2759]: E0307 01:15:16.112607 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25502944-3541-4549-a9a1-bb47aa3bb3f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6mtnf" podUID="25502944-3541-4549-a9a1-bb47aa3bb3f0" Mar 7 01:15:16.162795 containerd[1594]: time="2026-03-07T01:15:16.161501473Z" level=error msg="StopPodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" failed" error="failed to destroy network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.163012 kubelet[2759]: E0307 01:15:16.162539 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:16.163012 kubelet[2759]: E0307 01:15:16.162619 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189"} Mar 7 01:15:16.163012 kubelet[2759]: E0307 01:15:16.162671 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.163012 kubelet[2759]: E0307 01:15:16.162714 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e38da1e-df53-4611-ac35-2d4dd975c9f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njdzt" podUID="9e38da1e-df53-4611-ac35-2d4dd975c9f5" Mar 7 01:15:16.189231 containerd[1594]: time="2026-03-07T01:15:16.188811288Z" level=error msg="StopPodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" failed" error="failed to destroy network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.189937 kubelet[2759]: E0307 01:15:16.189575 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:16.189937 kubelet[2759]: E0307 01:15:16.189640 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7"} Mar 7 01:15:16.189937 kubelet[2759]: E0307 01:15:16.189692 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67ade84f-aa00-4fe6-9543-b7e8369907a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.189937 kubelet[2759]: E0307 01:15:16.189730 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67ade84f-aa00-4fe6-9543-b7e8369907a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74586d586-nnrsd" podUID="67ade84f-aa00-4fe6-9543-b7e8369907a3" Mar 7 01:15:16.210092 containerd[1594]: time="2026-03-07T01:15:16.208637154Z" level=error msg="StopPodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" failed" error="failed to destroy network 
for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.212782 kubelet[2759]: E0307 01:15:16.212728 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:16.213181 kubelet[2759]: E0307 01:15:16.213145 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a"} Mar 7 01:15:16.213620 kubelet[2759]: E0307 01:15:16.213349 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.213620 kubelet[2759]: E0307 01:15:16.213500 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff69f69f-wmcc9" podUID="fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a" Mar 7 01:15:16.215393 containerd[1594]: time="2026-03-07T01:15:16.214466188Z" level=error msg="StopPodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" failed" error="failed to destroy network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.215393 containerd[1594]: time="2026-03-07T01:15:16.214722020Z" level=error msg="StopPodSandbox for \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" failed" error="failed to destroy network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:15:16.215557 kubelet[2759]: E0307 01:15:16.214989 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:16.215557 kubelet[2759]: E0307 01:15:16.215032 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3"} Mar 7 01:15:16.215557 kubelet[2759]: 
E0307 01:15:16.215077 2759 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2959f216-b13f-411b-bdca-1a485a69cf02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.215557 kubelet[2759]: E0307 01:15:16.215111 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2959f216-b13f-411b-bdca-1a485a69cf02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-dff69f69f-t24f6" podUID="2959f216-b13f-411b-bdca-1a485a69cf02" Mar 7 01:15:16.215847 kubelet[2759]: E0307 01:15:16.215155 2759 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:16.215847 kubelet[2759]: E0307 01:15:16.215189 2759 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08"} Mar 7 01:15:16.215847 kubelet[2759]: E0307 01:15:16.215218 2759 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0772850a-c739-466a-aa06-6d8eb68ff187\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:15:16.215847 kubelet[2759]: E0307 01:15:16.215253 2759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0772850a-c739-466a-aa06-6d8eb68ff187\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-2c4qk" podUID="0772850a-c739-466a-aa06-6d8eb68ff187" Mar 7 01:15:16.229092 containerd[1594]: time="2026-03-07T01:15:16.229037496Z" level=info msg="StartContainer for \"0b16b7437c81e421311172c04906f4f222689ed78c38e368db58ddc4c21821dc\" returns successfully" Mar 7 01:15:16.362726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7-shm.mount: Deactivated successfully. 
Mar 7 01:15:16.976700 containerd[1594]: time="2026-03-07T01:15:16.976267397Z" level=info msg="StopPodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\"" Mar 7 01:15:17.046924 kubelet[2759]: I0307 01:15:17.046139 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-njjw5" podStartSLOduration=5.147333558 podStartE2EDuration="22.04610864s" podCreationTimestamp="2026-03-07 01:14:55 +0000 UTC" firstStartedPulling="2026-03-07 01:14:56.283776218 +0000 UTC m=+19.876148650" lastFinishedPulling="2026-03-07 01:15:13.182551293 +0000 UTC m=+36.774923732" observedRunningTime="2026-03-07 01:15:17.008426555 +0000 UTC m=+40.600799016" watchObservedRunningTime="2026-03-07 01:15:17.04610864 +0000 UTC m=+40.638481099" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.045 [INFO][4047] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.045 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" iface="eth0" netns="/var/run/netns/cni-bbb17680-0266-eb76-1b6a-c658bdee6cd5" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.046 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" iface="eth0" netns="/var/run/netns/cni-bbb17680-0266-eb76-1b6a-c658bdee6cd5" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.047 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" iface="eth0" netns="/var/run/netns/cni-bbb17680-0266-eb76-1b6a-c658bdee6cd5" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.047 [INFO][4047] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.047 [INFO][4047] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.076 [INFO][4055] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.076 [INFO][4055] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.076 [INFO][4055] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.087 [WARNING][4055] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.087 [INFO][4055] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.089 [INFO][4055] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:17.094535 containerd[1594]: 2026-03-07 01:15:17.092 [INFO][4047] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:17.097144 containerd[1594]: time="2026-03-07T01:15:17.095522068Z" level=info msg="TearDown network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" successfully" Mar 7 01:15:17.097144 containerd[1594]: time="2026-03-07T01:15:17.095562792Z" level=info msg="StopPodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" returns successfully" Mar 7 01:15:17.103901 systemd[1]: run-netns-cni\x2dbbb17680\x2d0266\x2deb76\x2d1b6a\x2dc658bdee6cd5.mount: Deactivated successfully. 
Mar 7 01:15:17.188449 kubelet[2759]: I0307 01:15:17.188383 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-nginx-config\") pod \"a4b670b8-f0de-4b5d-bfd0-52982198657f\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " Mar 7 01:15:17.188449 kubelet[2759]: I0307 01:15:17.188449 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-ca-bundle\") pod \"a4b670b8-f0de-4b5d-bfd0-52982198657f\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " Mar 7 01:15:17.188734 kubelet[2759]: I0307 01:15:17.188502 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-backend-key-pair\") pod \"a4b670b8-f0de-4b5d-bfd0-52982198657f\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " Mar 7 01:15:17.188734 kubelet[2759]: I0307 01:15:17.188534 2759 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97nn2\" (UniqueName: \"kubernetes.io/projected/a4b670b8-f0de-4b5d-bfd0-52982198657f-kube-api-access-97nn2\") pod \"a4b670b8-f0de-4b5d-bfd0-52982198657f\" (UID: \"a4b670b8-f0de-4b5d-bfd0-52982198657f\") " Mar 7 01:15:17.190200 kubelet[2759]: I0307 01:15:17.189340 2759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a4b670b8-f0de-4b5d-bfd0-52982198657f" (UID: "a4b670b8-f0de-4b5d-bfd0-52982198657f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:15:17.190200 kubelet[2759]: I0307 01:15:17.189824 2759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a4b670b8-f0de-4b5d-bfd0-52982198657f" (UID: "a4b670b8-f0de-4b5d-bfd0-52982198657f"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:15:17.194655 kubelet[2759]: I0307 01:15:17.194566 2759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b670b8-f0de-4b5d-bfd0-52982198657f-kube-api-access-97nn2" (OuterVolumeSpecName: "kube-api-access-97nn2") pod "a4b670b8-f0de-4b5d-bfd0-52982198657f" (UID: "a4b670b8-f0de-4b5d-bfd0-52982198657f"). InnerVolumeSpecName "kube-api-access-97nn2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:15:17.196814 kubelet[2759]: I0307 01:15:17.196784 2759 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a4b670b8-f0de-4b5d-bfd0-52982198657f" (UID: "a4b670b8-f0de-4b5d-bfd0-52982198657f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:15:17.200457 systemd[1]: var-lib-kubelet-pods-a4b670b8\x2df0de\x2d4b5d\x2dbfd0\x2d52982198657f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d97nn2.mount: Deactivated successfully. Mar 7 01:15:17.206209 systemd[1]: var-lib-kubelet-pods-a4b670b8\x2df0de\x2d4b5d\x2dbfd0\x2d52982198657f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 01:15:17.289698 kubelet[2759]: I0307 01:15:17.289530 2759 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" DevicePath \"\"" Mar 7 01:15:17.289698 kubelet[2759]: I0307 01:15:17.289587 2759 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-97nn2\" (UniqueName: \"kubernetes.io/projected/a4b670b8-f0de-4b5d-bfd0-52982198657f-kube-api-access-97nn2\") on node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" DevicePath \"\"" Mar 7 01:15:17.289698 kubelet[2759]: I0307 01:15:17.289605 2759 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-nginx-config\") on node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" DevicePath \"\"" Mar 7 01:15:17.289698 kubelet[2759]: I0307 01:15:17.289621 2759 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4b670b8-f0de-4b5d-bfd0-52982198657f-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4\" DevicePath \"\"" Mar 7 01:15:18.095258 kubelet[2759]: I0307 01:15:18.095141 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eba6ad4-a1c1-4fae-934d-613275eaae4d-whisker-backend-key-pair\") pod \"whisker-679754d4bc-9vbcw\" (UID: \"6eba6ad4-a1c1-4fae-934d-613275eaae4d\") " pod="calico-system/whisker-679754d4bc-9vbcw" Mar 7 01:15:18.095258 kubelet[2759]: I0307 01:15:18.095216 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6eba6ad4-a1c1-4fae-934d-613275eaae4d-nginx-config\") pod 
\"whisker-679754d4bc-9vbcw\" (UID: \"6eba6ad4-a1c1-4fae-934d-613275eaae4d\") " pod="calico-system/whisker-679754d4bc-9vbcw" Mar 7 01:15:18.096015 kubelet[2759]: I0307 01:15:18.095297 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchh7\" (UniqueName: \"kubernetes.io/projected/6eba6ad4-a1c1-4fae-934d-613275eaae4d-kube-api-access-tchh7\") pod \"whisker-679754d4bc-9vbcw\" (UID: \"6eba6ad4-a1c1-4fae-934d-613275eaae4d\") " pod="calico-system/whisker-679754d4bc-9vbcw" Mar 7 01:15:18.096015 kubelet[2759]: I0307 01:15:18.095347 2759 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eba6ad4-a1c1-4fae-934d-613275eaae4d-whisker-ca-bundle\") pod \"whisker-679754d4bc-9vbcw\" (UID: \"6eba6ad4-a1c1-4fae-934d-613275eaae4d\") " pod="calico-system/whisker-679754d4bc-9vbcw" Mar 7 01:15:18.379192 containerd[1594]: time="2026-03-07T01:15:18.378331046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679754d4bc-9vbcw,Uid:6eba6ad4-a1c1-4fae-934d-613275eaae4d,Namespace:calico-system,Attempt:0,}" Mar 7 01:15:18.534628 kernel: calico-node[4156]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:15:18.606676 kubelet[2759]: I0307 01:15:18.606599 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b670b8-f0de-4b5d-bfd0-52982198657f" path="/var/lib/kubelet/pods/a4b670b8-f0de-4b5d-bfd0-52982198657f/volumes" Mar 7 01:15:18.697330 systemd-networkd[1221]: calic84103eb647: Link UP Mar 7 01:15:18.700997 systemd-networkd[1221]: calic84103eb647: Gained carrier Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.529 [INFO][4173] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0 whisker-679754d4bc- 
calico-system 6eba6ad4-a1c1-4fae-934d-613275eaae4d 946 0 2026-03-07 01:15:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:679754d4bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 whisker-679754d4bc-9vbcw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic84103eb647 [] [] }} ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.529 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.585 [INFO][4207] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" HandleID="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.607 [INFO][4207] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" HandleID="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002fdd90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"whisker-679754d4bc-9vbcw", "timestamp":"2026-03-07 01:15:18.585823243 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001866e0)} Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.608 [INFO][4207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.608 [INFO][4207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.608 [INFO][4207] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.611 [INFO][4207] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.617 [INFO][4207] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.624 [INFO][4207] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.630 [INFO][4207] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.635 [INFO][4207] ipam/ipam.go 237: 
Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.635 [INFO][4207] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.639 [INFO][4207] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2 Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.645 [INFO][4207] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.653 [INFO][4207] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.193/26] block=192.168.88.192/26 handle="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.654 [INFO][4207] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.193/26] handle="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.654 [INFO][4207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:15:18.730646 containerd[1594]: 2026-03-07 01:15:18.654 [INFO][4207] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.193/26] IPv6=[] ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" HandleID="k8s-pod-network.1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.732372 containerd[1594]: 2026-03-07 01:15:18.657 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0", GenerateName:"whisker-679754d4bc-", Namespace:"calico-system", SelfLink:"", UID:"6eba6ad4-a1c1-4fae-934d-613275eaae4d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679754d4bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"whisker-679754d4bc-9vbcw", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic84103eb647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:18.732372 containerd[1594]: 2026-03-07 01:15:18.657 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.193/32] ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.732372 containerd[1594]: 2026-03-07 01:15:18.657 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic84103eb647 ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.732372 containerd[1594]: 2026-03-07 01:15:18.702 [INFO][4173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.732372 containerd[1594]: 2026-03-07 01:15:18.702 [INFO][4173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0", GenerateName:"whisker-679754d4bc-", Namespace:"calico-system", SelfLink:"", UID:"6eba6ad4-a1c1-4fae-934d-613275eaae4d", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679754d4bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2", Pod:"whisker-679754d4bc-9vbcw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic84103eb647", MAC:"ce:11:1f:ae:d3:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:18.732372 containerd[1594]: 2026-03-07 01:15:18.723 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2" Namespace="calico-system" Pod="whisker-679754d4bc-9vbcw" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--679754d4bc--9vbcw-eth0" Mar 7 01:15:18.771669 containerd[1594]: 
time="2026-03-07T01:15:18.770885143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:18.771669 containerd[1594]: time="2026-03-07T01:15:18.770967037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:18.771669 containerd[1594]: time="2026-03-07T01:15:18.770994461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:18.771669 containerd[1594]: time="2026-03-07T01:15:18.771116658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:18.906281 containerd[1594]: time="2026-03-07T01:15:18.906214534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679754d4bc-9vbcw,Uid:6eba6ad4-a1c1-4fae-934d-613275eaae4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2\"" Mar 7 01:15:18.910137 containerd[1594]: time="2026-03-07T01:15:18.909786284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:15:19.254847 systemd-networkd[1221]: vxlan.calico: Link UP Mar 7 01:15:19.254859 systemd-networkd[1221]: vxlan.calico: Gained carrier Mar 7 01:15:19.962602 containerd[1594]: time="2026-03-07T01:15:19.962468526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:19.964114 containerd[1594]: time="2026-03-07T01:15:19.963990404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:15:19.965524 containerd[1594]: time="2026-03-07T01:15:19.965454773Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:19.969092 containerd[1594]: time="2026-03-07T01:15:19.968673674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:19.969955 containerd[1594]: time="2026-03-07T01:15:19.969737046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.059898823s" Mar 7 01:15:19.969955 containerd[1594]: time="2026-03-07T01:15:19.969782924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:15:19.975083 containerd[1594]: time="2026-03-07T01:15:19.975042581Z" level=info msg="CreateContainer within sandbox \"1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:15:19.996502 containerd[1594]: time="2026-03-07T01:15:19.996435367Z" level=info msg="CreateContainer within sandbox \"1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"00bc3bb385930e927371f4cfc0fb16148b69baa4f4d3b82d59668231b07bffbb\"" Mar 7 01:15:19.998170 containerd[1594]: time="2026-03-07T01:15:19.997937234Z" level=info msg="StartContainer for \"00bc3bb385930e927371f4cfc0fb16148b69baa4f4d3b82d59668231b07bffbb\"" Mar 7 01:15:20.119025 containerd[1594]: time="2026-03-07T01:15:20.118932084Z" level=info msg="StartContainer 
for \"00bc3bb385930e927371f4cfc0fb16148b69baa4f4d3b82d59668231b07bffbb\" returns successfully" Mar 7 01:15:20.126976 containerd[1594]: time="2026-03-07T01:15:20.126613231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:15:20.163746 systemd-networkd[1221]: calic84103eb647: Gained IPv6LL Mar 7 01:15:20.611587 systemd-networkd[1221]: vxlan.calico: Gained IPv6LL Mar 7 01:15:21.540108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168380194.mount: Deactivated successfully. Mar 7 01:15:21.557340 containerd[1594]: time="2026-03-07T01:15:21.557270747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:21.558773 containerd[1594]: time="2026-03-07T01:15:21.558643349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:15:21.560164 containerd[1594]: time="2026-03-07T01:15:21.559895990Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:21.564011 containerd[1594]: time="2026-03-07T01:15:21.563940116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:21.566418 containerd[1594]: time="2026-03-07T01:15:21.566320930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.439648939s" Mar 7 01:15:21.566682 
containerd[1594]: time="2026-03-07T01:15:21.566459428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:15:21.573070 containerd[1594]: time="2026-03-07T01:15:21.573022394Z" level=info msg="CreateContainer within sandbox \"1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:15:21.590007 containerd[1594]: time="2026-03-07T01:15:21.589954916Z" level=info msg="CreateContainer within sandbox \"1c50ddb5d493950a5af9057bb48e6be9d76425b8be36e3bdcc7a55c957f746d2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c266321161ca81c0468f1cf149bdb253fc55e648d391f93665c398853ddd6136\"" Mar 7 01:15:21.593079 containerd[1594]: time="2026-03-07T01:15:21.591783585Z" level=info msg="StartContainer for \"c266321161ca81c0468f1cf149bdb253fc55e648d391f93665c398853ddd6136\"" Mar 7 01:15:21.701633 containerd[1594]: time="2026-03-07T01:15:21.700774882Z" level=info msg="StartContainer for \"c266321161ca81c0468f1cf149bdb253fc55e648d391f93665c398853ddd6136\" returns successfully" Mar 7 01:15:22.018160 kubelet[2759]: I0307 01:15:22.018063 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-679754d4bc-9vbcw" podStartSLOduration=1.359703891 podStartE2EDuration="4.018039604s" podCreationTimestamp="2026-03-07 01:15:18 +0000 UTC" firstStartedPulling="2026-03-07 01:15:18.90916034 +0000 UTC m=+42.501532775" lastFinishedPulling="2026-03-07 01:15:21.567496042 +0000 UTC m=+45.159868488" observedRunningTime="2026-03-07 01:15:22.017333676 +0000 UTC m=+45.609706135" watchObservedRunningTime="2026-03-07 01:15:22.018039604 +0000 UTC m=+45.610412061" Mar 7 01:15:22.792550 ntpd[1540]: Listen normally on 6 vxlan.calico 192.168.88.192:123 Mar 7 01:15:22.792670 ntpd[1540]: Listen normally on 7 calic84103eb647 
[fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 01:15:22.793242 ntpd[1540]: 7 Mar 01:15:22 ntpd[1540]: Listen normally on 6 vxlan.calico 192.168.88.192:123 Mar 7 01:15:22.793242 ntpd[1540]: 7 Mar 01:15:22 ntpd[1540]: Listen normally on 7 calic84103eb647 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 01:15:22.793242 ntpd[1540]: 7 Mar 01:15:22 ntpd[1540]: Listen normally on 8 vxlan.calico [fe80::6450:8eff:fe34:c47e%5]:123 Mar 7 01:15:22.792758 ntpd[1540]: Listen normally on 8 vxlan.calico [fe80::6450:8eff:fe34:c47e%5]:123 Mar 7 01:15:23.988825 kubelet[2759]: I0307 01:15:23.988049 2759 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:15:26.600137 containerd[1594]: time="2026-03-07T01:15:26.600081580Z" level=info msg="StopPodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\"" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.713 [INFO][4504] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.714 [INFO][4504] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" iface="eth0" netns="/var/run/netns/cni-ccafae6e-9725-4cc8-5e0e-a05599f675f9" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.714 [INFO][4504] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" iface="eth0" netns="/var/run/netns/cni-ccafae6e-9725-4cc8-5e0e-a05599f675f9" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.714 [INFO][4504] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" iface="eth0" netns="/var/run/netns/cni-ccafae6e-9725-4cc8-5e0e-a05599f675f9" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.714 [INFO][4504] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.714 [INFO][4504] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.754 [INFO][4512] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.754 [INFO][4512] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.755 [INFO][4512] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.773 [WARNING][4512] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.773 [INFO][4512] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.776 [INFO][4512] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:26.788117 containerd[1594]: 2026-03-07 01:15:26.783 [INFO][4504] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:26.788922 containerd[1594]: time="2026-03-07T01:15:26.788499882Z" level=info msg="TearDown network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" successfully" Mar 7 01:15:26.788922 containerd[1594]: time="2026-03-07T01:15:26.788540469Z" level=info msg="StopPodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" returns successfully" Mar 7 01:15:26.793598 containerd[1594]: time="2026-03-07T01:15:26.791337510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-wmcc9,Uid:fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a,Namespace:calico-system,Attempt:1,}" Mar 7 01:15:26.806827 systemd[1]: run-netns-cni\x2dccafae6e\x2d9725\x2d4cc8\x2d5e0e\x2da05599f675f9.mount: Deactivated successfully. 
Mar 7 01:15:27.119506 systemd-networkd[1221]: calie908be40a34: Link UP Mar 7 01:15:27.123756 systemd-networkd[1221]: calie908be40a34: Gained carrier Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:26.922 [INFO][4518] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0 calico-apiserver-dff69f69f- calico-system fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a 984 0 2026-03-07 01:14:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dff69f69f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 calico-apiserver-dff69f69f-wmcc9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie908be40a34 [] [] }} ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:26.923 [INFO][4518] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.050 [INFO][4531] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" HandleID="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" 
Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.063 [INFO][4531] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" HandleID="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000204350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"calico-apiserver-dff69f69f-wmcc9", "timestamp":"2026-03-07 01:15:27.050580201 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001862c0)} Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.063 [INFO][4531] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.063 [INFO][4531] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.063 [INFO][4531] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.066 [INFO][4531] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.072 [INFO][4531] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.079 [INFO][4531] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.081 [INFO][4531] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.084 [INFO][4531] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.084 [INFO][4531] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.086 [INFO][4531] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.095 [INFO][4531] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 
handle="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.105 [INFO][4531] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.194/26] block=192.168.88.192/26 handle="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.105 [INFO][4531] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.194/26] handle="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.105 [INFO][4531] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:27.159083 containerd[1594]: 2026-03-07 01:15:27.105 [INFO][4531] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.194/26] IPv6=[] ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" HandleID="k8s-pod-network.f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.160545 containerd[1594]: 2026-03-07 01:15:27.109 [INFO][4518] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"calico-apiserver-dff69f69f-wmcc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie908be40a34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:27.160545 containerd[1594]: 2026-03-07 01:15:27.109 [INFO][4518] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.194/32] ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.160545 containerd[1594]: 2026-03-07 01:15:27.110 [INFO][4518] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calie908be40a34 ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.160545 containerd[1594]: 2026-03-07 01:15:27.124 [INFO][4518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.160545 containerd[1594]: 2026-03-07 01:15:27.126 [INFO][4518] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e", Pod:"calico-apiserver-dff69f69f-wmcc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie908be40a34", MAC:"16:18:ed:77:73:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:27.160545 containerd[1594]: 2026-03-07 01:15:27.148 [INFO][4518] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-wmcc9" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:27.251598 containerd[1594]: time="2026-03-07T01:15:27.250775985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:27.251598 containerd[1594]: time="2026-03-07T01:15:27.250864898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:27.251598 containerd[1594]: time="2026-03-07T01:15:27.250891084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:27.251598 containerd[1594]: time="2026-03-07T01:15:27.251043372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:27.430022 containerd[1594]: time="2026-03-07T01:15:27.429841318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-wmcc9,Uid:fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e\"" Mar 7 01:15:27.435844 containerd[1594]: time="2026-03-07T01:15:27.435796910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:15:27.597115 containerd[1594]: time="2026-03-07T01:15:27.597060040Z" level=info msg="StopPodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\"" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.698 [INFO][4609] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.700 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" iface="eth0" netns="/var/run/netns/cni-31a8c10d-fad1-8c4c-8673-2e200b9e1529" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.701 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" iface="eth0" netns="/var/run/netns/cni-31a8c10d-fad1-8c4c-8673-2e200b9e1529" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.703 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" iface="eth0" netns="/var/run/netns/cni-31a8c10d-fad1-8c4c-8673-2e200b9e1529" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.703 [INFO][4609] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.704 [INFO][4609] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.759 [INFO][4616] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.759 [INFO][4616] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.760 [INFO][4616] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.774 [WARNING][4616] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.774 [INFO][4616] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.777 [INFO][4616] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:27.782610 containerd[1594]: 2026-03-07 01:15:27.780 [INFO][4609] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:27.785688 containerd[1594]: time="2026-03-07T01:15:27.782847606Z" level=info msg="TearDown network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" successfully" Mar 7 01:15:27.785688 containerd[1594]: time="2026-03-07T01:15:27.782886339Z" level=info msg="StopPodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" returns successfully" Mar 7 01:15:27.788513 containerd[1594]: time="2026-03-07T01:15:27.786103029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njdzt,Uid:9e38da1e-df53-4611-ac35-2d4dd975c9f5,Namespace:calico-system,Attempt:1,}" Mar 7 01:15:27.799348 systemd[1]: run-netns-cni\x2d31a8c10d\x2dfad1\x2d8c4c\x2d8673\x2d2e200b9e1529.mount: Deactivated successfully. 
Mar 7 01:15:28.048608 systemd-networkd[1221]: cali2e7e63d1b11: Link UP Mar 7 01:15:28.050819 systemd-networkd[1221]: cali2e7e63d1b11: Gained carrier Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.902 [INFO][4623] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0 csi-node-driver- calico-system 9e38da1e-df53-4611-ac35-2d4dd975c9f5 1000 0 2026-03-07 01:14:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 csi-node-driver-njdzt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2e7e63d1b11 [] [] }} ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.902 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.976 [INFO][4635] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" HandleID="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" 
Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.988 [INFO][4635] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" HandleID="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"csi-node-driver-njdzt", "timestamp":"2026-03-07 01:15:27.976669205 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00032adc0)} Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.989 [INFO][4635] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.989 [INFO][4635] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.989 [INFO][4635] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.992 [INFO][4635] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:27.998 [INFO][4635] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.004 [INFO][4635] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.007 [INFO][4635] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.011 [INFO][4635] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.011 [INFO][4635] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.013 [INFO][4635] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5 Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.019 [INFO][4635] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 
handle="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.034 [INFO][4635] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.195/26] block=192.168.88.192/26 handle="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.035 [INFO][4635] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.195/26] handle="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.035 [INFO][4635] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:28.096407 containerd[1594]: 2026-03-07 01:15:28.035 [INFO][4635] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.195/26] IPv6=[] ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" HandleID="k8s-pod-network.927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.103570 containerd[1594]: 2026-03-07 01:15:28.042 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"9e38da1e-df53-4611-ac35-2d4dd975c9f5", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"csi-node-driver-njdzt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e7e63d1b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:28.103570 containerd[1594]: 2026-03-07 01:15:28.042 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.195/32] ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.103570 containerd[1594]: 2026-03-07 01:15:28.042 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e7e63d1b11 ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.103570 containerd[1594]: 2026-03-07 01:15:28.054 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.103570 containerd[1594]: 2026-03-07 01:15:28.055 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e38da1e-df53-4611-ac35-2d4dd975c9f5", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5", Pod:"csi-node-driver-njdzt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e7e63d1b11", MAC:"f2:79:40:69:06:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:28.103570 containerd[1594]: 2026-03-07 01:15:28.078 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5" Namespace="calico-system" Pod="csi-node-driver-njdzt" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:28.184569 containerd[1594]: time="2026-03-07T01:15:28.184387928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:28.185639 containerd[1594]: time="2026-03-07T01:15:28.184832476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:28.185793 containerd[1594]: time="2026-03-07T01:15:28.185702755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:28.186198 containerd[1594]: time="2026-03-07T01:15:28.186084740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:28.476269 containerd[1594]: time="2026-03-07T01:15:28.476220207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njdzt,Uid:9e38da1e-df53-4611-ac35-2d4dd975c9f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5\"" Mar 7 01:15:28.599275 containerd[1594]: time="2026-03-07T01:15:28.598564615Z" level=info msg="StopPodSandbox for \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\"" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.769 [INFO][4707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.769 [INFO][4707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" iface="eth0" netns="/var/run/netns/cni-df3051f1-2339-36f4-7419-90b838ebb0e6" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.771 [INFO][4707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" iface="eth0" netns="/var/run/netns/cni-df3051f1-2339-36f4-7419-90b838ebb0e6" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.772 [INFO][4707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" iface="eth0" netns="/var/run/netns/cni-df3051f1-2339-36f4-7419-90b838ebb0e6" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.772 [INFO][4707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.772 [INFO][4707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.899 [INFO][4714] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.905 [INFO][4714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.906 [INFO][4714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.933 [WARNING][4714] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.933 [INFO][4714] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.935 [INFO][4714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:28.949643 containerd[1594]: 2026-03-07 01:15:28.943 [INFO][4707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:28.954860 containerd[1594]: time="2026-03-07T01:15:28.951017449Z" level=info msg="TearDown network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" successfully" Mar 7 01:15:28.954860 containerd[1594]: time="2026-03-07T01:15:28.951065125Z" level=info msg="StopPodSandbox for \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" returns successfully" Mar 7 01:15:28.961411 containerd[1594]: time="2026-03-07T01:15:28.959016435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v6pgz,Uid:9b92beb4-baab-4ede-a5b0-75188956c922,Namespace:kube-system,Attempt:1,}" Mar 7 01:15:28.965120 systemd[1]: run-netns-cni\x2ddf3051f1\x2d2339\x2d36f4\x2d7419\x2d90b838ebb0e6.mount: Deactivated successfully. 
Mar 7 01:15:29.062672 systemd-networkd[1221]: calie908be40a34: Gained IPv6LL Mar 7 01:15:29.343210 systemd-networkd[1221]: cali128b24a9ea0: Link UP Mar 7 01:15:29.343929 systemd-networkd[1221]: cali128b24a9ea0: Gained carrier Mar 7 01:15:29.384446 systemd-networkd[1221]: cali2e7e63d1b11: Gained IPv6LL Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.193 [INFO][4721] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0 coredns-674b8bbfcf- kube-system 9b92beb4-baab-4ede-a5b0-75188956c922 1008 0 2026-03-07 01:14:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 coredns-674b8bbfcf-v6pgz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali128b24a9ea0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.194 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.261 [INFO][4736] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" 
HandleID="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.274 [INFO][4736] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" HandleID="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315ea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"coredns-674b8bbfcf-v6pgz", "timestamp":"2026-03-07 01:15:29.261860852 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112dc0)} Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.274 [INFO][4736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.274 [INFO][4736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.274 [INFO][4736] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.279 [INFO][4736] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.287 [INFO][4736] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.293 [INFO][4736] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.297 [INFO][4736] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.301 [INFO][4736] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.302 [INFO][4736] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.305 [INFO][4736] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737 Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.310 [INFO][4736] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 
handle="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.330 [INFO][4736] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.196/26] block=192.168.88.192/26 handle="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.330 [INFO][4736] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.196/26] handle="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.330 [INFO][4736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:29.387239 containerd[1594]: 2026-03-07 01:15:29.330 [INFO][4736] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.196/26] IPv6=[] ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" HandleID="k8s-pod-network.82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.393432 containerd[1594]: 2026-03-07 01:15:29.337 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0", GenerateName:"coredns-674b8bbfcf-", 
Namespace:"kube-system", SelfLink:"", UID:"9b92beb4-baab-4ede-a5b0-75188956c922", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"coredns-674b8bbfcf-v6pgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128b24a9ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:29.393432 containerd[1594]: 2026-03-07 01:15:29.337 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.196/32] ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.393432 
containerd[1594]: 2026-03-07 01:15:29.337 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali128b24a9ea0 ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.393432 containerd[1594]: 2026-03-07 01:15:29.347 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.393432 containerd[1594]: 2026-03-07 01:15:29.355 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b92beb4-baab-4ede-a5b0-75188956c922", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737", Pod:"coredns-674b8bbfcf-v6pgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128b24a9ea0", MAC:"4e:5c:96:80:8d:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:29.393432 containerd[1594]: 2026-03-07 01:15:29.379 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737" Namespace="kube-system" Pod="coredns-674b8bbfcf-v6pgz" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:29.505618 containerd[1594]: time="2026-03-07T01:15:29.505168102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:29.505618 containerd[1594]: time="2026-03-07T01:15:29.505262089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:29.505618 containerd[1594]: time="2026-03-07T01:15:29.505290675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:29.509792 containerd[1594]: time="2026-03-07T01:15:29.505473548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:29.602128 containerd[1594]: time="2026-03-07T01:15:29.597617723Z" level=info msg="StopPodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\"" Mar 7 01:15:29.807092 containerd[1594]: time="2026-03-07T01:15:29.807037536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v6pgz,Uid:9b92beb4-baab-4ede-a5b0-75188956c922,Namespace:kube-system,Attempt:1,} returns sandbox id \"82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737\"" Mar 7 01:15:29.824526 containerd[1594]: time="2026-03-07T01:15:29.823714408Z" level=info msg="CreateContainer within sandbox \"82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:15:29.860707 containerd[1594]: time="2026-03-07T01:15:29.859091344Z" level=info msg="CreateContainer within sandbox \"82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b43aeb04339cdd99111b667561ae4328d558823a3008019d38b005d77cbd7f4\"" Mar 7 01:15:29.861098 containerd[1594]: time="2026-03-07T01:15:29.860802174Z" level=info msg="StartContainer for \"6b43aeb04339cdd99111b667561ae4328d558823a3008019d38b005d77cbd7f4\"" Mar 7 01:15:30.100149 containerd[1594]: time="2026-03-07T01:15:30.098344847Z" level=info msg="StartContainer for \"6b43aeb04339cdd99111b667561ae4328d558823a3008019d38b005d77cbd7f4\" returns successfully" Mar 7 01:15:30.191082 containerd[1594]: 
2026-03-07 01:15:29.903 [INFO][4804] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:29.904 [INFO][4804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" iface="eth0" netns="/var/run/netns/cni-d4911a2b-a5b6-87e9-d9e9-d9464ce2fabd" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:29.905 [INFO][4804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" iface="eth0" netns="/var/run/netns/cni-d4911a2b-a5b6-87e9-d9e9-d9464ce2fabd" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:29.906 [INFO][4804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" iface="eth0" netns="/var/run/netns/cni-d4911a2b-a5b6-87e9-d9e9-d9464ce2fabd" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:29.906 [INFO][4804] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:29.906 [INFO][4804] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.122 [INFO][4839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.123 [INFO][4839] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.123 [INFO][4839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.151 [WARNING][4839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.151 [INFO][4839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.158 [INFO][4839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:30.191082 containerd[1594]: 2026-03-07 01:15:30.174 [INFO][4804] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:30.193700 containerd[1594]: time="2026-03-07T01:15:30.193528023Z" level=info msg="TearDown network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" successfully" Mar 7 01:15:30.193700 containerd[1594]: time="2026-03-07T01:15:30.193574350Z" level=info msg="StopPodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" returns successfully" Mar 7 01:15:30.200343 containerd[1594]: time="2026-03-07T01:15:30.200295076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mtnf,Uid:25502944-3541-4549-a9a1-bb47aa3bb3f0,Namespace:kube-system,Attempt:1,}" Mar 7 01:15:30.201796 systemd[1]: run-netns-cni\x2dd4911a2b\x2da5b6\x2d87e9\x2dd9e9\x2dd9464ce2fabd.mount: Deactivated successfully. Mar 7 01:15:30.587267 systemd-networkd[1221]: calia49d34998df: Link UP Mar 7 01:15:30.591598 systemd-networkd[1221]: calia49d34998df: Gained carrier Mar 7 01:15:30.621545 containerd[1594]: time="2026-03-07T01:15:30.620704976Z" level=info msg="StopPodSandbox for \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\"" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.400 [INFO][4876] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0 coredns-674b8bbfcf- kube-system 25502944-3541-4549-a9a1-bb47aa3bb3f0 1016 0 2026-03-07 01:14:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 coredns-674b8bbfcf-6mtnf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia49d34998df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.401 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.492 [INFO][4888] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" HandleID="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.508 [INFO][4888] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" HandleID="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041dcb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"coredns-674b8bbfcf-6mtnf", "timestamp":"2026-03-07 01:15:30.492415623 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186c60)} Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.508 [INFO][4888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.508 [INFO][4888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.508 [INFO][4888] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.512 [INFO][4888] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.523 [INFO][4888] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.536 [INFO][4888] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.540 [INFO][4888] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.543 [INFO][4888] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.543 [INFO][4888] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 
2026-03-07 01:15:30.547 [INFO][4888] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.561 [INFO][4888] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.574 [INFO][4888] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.197/26] block=192.168.88.192/26 handle="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.574 [INFO][4888] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.197/26] handle="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.574 [INFO][4888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:15:30.639348 containerd[1594]: 2026-03-07 01:15:30.574 [INFO][4888] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.197/26] IPv6=[] ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" HandleID="k8s-pod-network.90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.645134 containerd[1594]: 2026-03-07 01:15:30.580 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25502944-3541-4549-a9a1-bb47aa3bb3f0", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"coredns-674b8bbfcf-6mtnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49d34998df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:30.645134 containerd[1594]: 2026-03-07 01:15:30.580 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.197/32] ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.645134 containerd[1594]: 2026-03-07 01:15:30.580 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia49d34998df ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.645134 containerd[1594]: 2026-03-07 01:15:30.590 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.645134 containerd[1594]: 2026-03-07 01:15:30.590 [INFO][4876] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25502944-3541-4549-a9a1-bb47aa3bb3f0", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f", Pod:"coredns-674b8bbfcf-6mtnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49d34998df", MAC:"ca:7d:98:ef:0a:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:30.645134 containerd[1594]: 2026-03-07 01:15:30.611 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mtnf" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:30.820480 containerd[1594]: time="2026-03-07T01:15:30.819320836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:30.820480 containerd[1594]: time="2026-03-07T01:15:30.819439370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:30.820480 containerd[1594]: time="2026-03-07T01:15:30.819469042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:30.827166 containerd[1594]: time="2026-03-07T01:15:30.823465458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:31.070067 containerd[1594]: time="2026-03-07T01:15:31.068626208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mtnf,Uid:25502944-3541-4549-a9a1-bb47aa3bb3f0,Namespace:kube-system,Attempt:1,} returns sandbox id \"90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f\"" Mar 7 01:15:31.097146 containerd[1594]: time="2026-03-07T01:15:31.096885317Z" level=info msg="CreateContainer within sandbox \"90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:15:31.164427 containerd[1594]: time="2026-03-07T01:15:31.163867361Z" level=info msg="CreateContainer within sandbox \"90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae1402e1b0dd9948bc7b686e0d2b94fa01ec050ba7cef4535067ebaa5de04525\"" Mar 7 01:15:31.173487 containerd[1594]: time="2026-03-07T01:15:31.172767486Z" level=info msg="StartContainer for \"ae1402e1b0dd9948bc7b686e0d2b94fa01ec050ba7cef4535067ebaa5de04525\"" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:30.931 [INFO][4917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:30.933 [INFO][4917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" iface="eth0" netns="/var/run/netns/cni-a9f26652-56ef-b413-87d4-7478eaa528f7" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:30.934 [INFO][4917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" iface="eth0" netns="/var/run/netns/cni-a9f26652-56ef-b413-87d4-7478eaa528f7" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:30.942 [INFO][4917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" iface="eth0" netns="/var/run/netns/cni-a9f26652-56ef-b413-87d4-7478eaa528f7" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:30.942 [INFO][4917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:30.942 [INFO][4917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.098 [INFO][4962] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.101 [INFO][4962] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.102 [INFO][4962] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.151 [WARNING][4962] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.152 [INFO][4962] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.171 [INFO][4962] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:31.216748 containerd[1594]: 2026-03-07 01:15:31.207 [INFO][4917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:31.232938 containerd[1594]: time="2026-03-07T01:15:31.222306947Z" level=info msg="TearDown network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" successfully" Mar 7 01:15:31.232938 containerd[1594]: time="2026-03-07T01:15:31.222355703Z" level=info msg="StopPodSandbox for \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" returns successfully" Mar 7 01:15:31.232346 systemd[1]: run-netns-cni\x2da9f26652\x2d56ef\x2db413\x2d87d4\x2d7478eaa528f7.mount: Deactivated successfully. 
Mar 7 01:15:31.233641 kubelet[2759]: I0307 01:15:31.224433 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v6pgz" podStartSLOduration=49.224405395 podStartE2EDuration="49.224405395s" podCreationTimestamp="2026-03-07 01:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:31.198030769 +0000 UTC m=+54.790403227" watchObservedRunningTime="2026-03-07 01:15:31.224405395 +0000 UTC m=+54.816777853" Mar 7 01:15:31.237937 systemd-networkd[1221]: cali128b24a9ea0: Gained IPv6LL Mar 7 01:15:31.240847 containerd[1594]: time="2026-03-07T01:15:31.240296335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-t24f6,Uid:2959f216-b13f-411b-bdca-1a485a69cf02,Namespace:calico-system,Attempt:1,}" Mar 7 01:15:31.451993 containerd[1594]: time="2026-03-07T01:15:31.451916357Z" level=info msg="StartContainer for \"ae1402e1b0dd9948bc7b686e0d2b94fa01ec050ba7cef4535067ebaa5de04525\" returns successfully" Mar 7 01:15:31.605500 containerd[1594]: time="2026-03-07T01:15:31.600563051Z" level=info msg="StopPodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\"" Mar 7 01:15:31.608824 containerd[1594]: time="2026-03-07T01:15:31.601598148Z" level=info msg="StopPodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\"" Mar 7 01:15:31.941184 systemd-networkd[1221]: calia49d34998df: Gained IPv6LL Mar 7 01:15:31.988330 systemd-networkd[1221]: calie4f40e17543: Link UP Mar 7 01:15:31.988769 systemd-networkd[1221]: calie4f40e17543: Gained carrier Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.764 [INFO][5058] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.766 [INFO][5058] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" iface="eth0" netns="/var/run/netns/cni-27529796-d0ac-7c8a-f4c7-9dc5e0c23b7e" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.767 [INFO][5058] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" iface="eth0" netns="/var/run/netns/cni-27529796-d0ac-7c8a-f4c7-9dc5e0c23b7e" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.767 [INFO][5058] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" iface="eth0" netns="/var/run/netns/cni-27529796-d0ac-7c8a-f4c7-9dc5e0c23b7e" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.767 [INFO][5058] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.767 [INFO][5058] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.950 [INFO][5069] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.961 [INFO][5069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.961 [INFO][5069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.972 [WARNING][5069] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.973 [INFO][5069] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:31.981 [INFO][5069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:32.026046 containerd[1594]: 2026-03-07 01:15:32.008 [INFO][5058] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:32.026046 containerd[1594]: time="2026-03-07T01:15:32.024767486Z" level=info msg="TearDown network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" successfully" Mar 7 01:15:32.026046 containerd[1594]: time="2026-03-07T01:15:32.024807651Z" level=info msg="StopPodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" returns successfully" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.581 [INFO][4993] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0 calico-apiserver-dff69f69f- calico-system 2959f216-b13f-411b-bdca-1a485a69cf02 1024 0 2026-03-07 01:14:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dff69f69f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 calico-apiserver-dff69f69f-t24f6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie4f40e17543 [] [] }} ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.582 [INFO][4993] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.835 [INFO][5036] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" HandleID="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.863 [INFO][5036] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" HandleID="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002775c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"calico-apiserver-dff69f69f-t24f6", "timestamp":"2026-03-07 01:15:31.83572735 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000530840)} Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.863 [INFO][5036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.863 [INFO][5036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.863 [INFO][5036] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.867 [INFO][5036] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.875 [INFO][5036] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.887 [INFO][5036] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.897 [INFO][5036] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.901 [INFO][5036] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.901 [INFO][5036] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.904 [INFO][5036] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48 Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.916 [INFO][5036] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 
handle="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.932 [INFO][5036] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.198/26] block=192.168.88.192/26 handle="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.932 [INFO][5036] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.198/26] handle="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.932 [INFO][5036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:32.044893 containerd[1594]: 2026-03-07 01:15:31.932 [INFO][5036] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.198/26] IPv6=[] ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" HandleID="k8s-pod-network.a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.044182 systemd[1]: run-netns-cni\x2d27529796\x2dd0ac\x2d7c8a\x2df4c7\x2d9dc5e0c23b7e.mount: Deactivated successfully. 
Mar 7 01:15:32.048235 containerd[1594]: 2026-03-07 01:15:31.966 [INFO][4993] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"2959f216-b13f-411b-bdca-1a485a69cf02", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"calico-apiserver-dff69f69f-t24f6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie4f40e17543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:32.048235 
containerd[1594]: 2026-03-07 01:15:31.966 [INFO][4993] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.198/32] ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.048235 containerd[1594]: 2026-03-07 01:15:31.966 [INFO][4993] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4f40e17543 ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.048235 containerd[1594]: 2026-03-07 01:15:31.990 [INFO][4993] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.048235 containerd[1594]: 2026-03-07 01:15:31.992 [INFO][4993] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", 
UID:"2959f216-b13f-411b-bdca-1a485a69cf02", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48", Pod:"calico-apiserver-dff69f69f-t24f6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie4f40e17543", MAC:"e2:4d:57:0f:8d:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:32.048235 containerd[1594]: 2026-03-07 01:15:32.022 [INFO][4993] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48" Namespace="calico-system" Pod="calico-apiserver-dff69f69f-t24f6" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:32.048235 containerd[1594]: time="2026-03-07T01:15:32.036682969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-2c4qk,Uid:0772850a-c739-466a-aa06-6d8eb68ff187,Namespace:calico-system,Attempt:1,}" Mar 7 01:15:32.219350 
containerd[1594]: time="2026-03-07T01:15:32.217429103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:32.219350 containerd[1594]: time="2026-03-07T01:15:32.217518221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:32.219350 containerd[1594]: time="2026-03-07T01:15:32.217546345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:32.219350 containerd[1594]: time="2026-03-07T01:15:32.217700708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:32.293308 kubelet[2759]: I0307 01:15:32.289216 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6mtnf" podStartSLOduration=50.288595226 podStartE2EDuration="50.288595226s" podCreationTimestamp="2026-03-07 01:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:32.245740265 +0000 UTC m=+55.838112722" watchObservedRunningTime="2026-03-07 01:15:32.288595226 +0000 UTC m=+55.880967682" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:31.934 [INFO][5059] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:31.935 [INFO][5059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" iface="eth0" netns="/var/run/netns/cni-0fe88ea0-7df8-444d-ae77-b7ca5e99b5ba" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:31.938 [INFO][5059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" iface="eth0" netns="/var/run/netns/cni-0fe88ea0-7df8-444d-ae77-b7ca5e99b5ba" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:31.939 [INFO][5059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" iface="eth0" netns="/var/run/netns/cni-0fe88ea0-7df8-444d-ae77-b7ca5e99b5ba" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:31.939 [INFO][5059] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:31.939 [INFO][5059] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.216 [INFO][5080] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.224 [INFO][5080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.224 [INFO][5080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.285 [WARNING][5080] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.285 [INFO][5080] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.289 [INFO][5080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:32.320180 containerd[1594]: 2026-03-07 01:15:32.301 [INFO][5059] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:32.320180 containerd[1594]: time="2026-03-07T01:15:32.320011424Z" level=info msg="TearDown network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" successfully" Mar 7 01:15:32.320180 containerd[1594]: time="2026-03-07T01:15:32.320050223Z" level=info msg="StopPodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" returns successfully" Mar 7 01:15:32.334934 containerd[1594]: time="2026-03-07T01:15:32.331851627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74586d586-nnrsd,Uid:67ade84f-aa00-4fe6-9543-b7e8369907a3,Namespace:calico-system,Attempt:1,}" Mar 7 01:15:32.500331 containerd[1594]: time="2026-03-07T01:15:32.500170767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dff69f69f-t24f6,Uid:2959f216-b13f-411b-bdca-1a485a69cf02,Namespace:calico-system,Attempt:1,} returns sandbox id \"a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48\"" Mar 7 01:15:32.598966 systemd-networkd[1221]: calic6253329250: Link UP Mar 7 01:15:32.599751 systemd-networkd[1221]: calic6253329250: Gained carrier Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.344 [INFO][5101] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0 goldmane-5b85766d88- calico-system 0772850a-c739-466a-aa06-6d8eb68ff187 1040 0 2026-03-07 01:14:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 goldmane-5b85766d88-2c4qk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] 
calic6253329250 [] [] }} ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.344 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.480 [INFO][5151] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" HandleID="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.506 [INFO][5151] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" HandleID="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf810), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"goldmane-5b85766d88-2c4qk", "timestamp":"2026-03-07 01:15:32.480158884 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000594160)} Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.509 [INFO][5151] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.510 [INFO][5151] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.510 [INFO][5151] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.518 [INFO][5151] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.527 [INFO][5151] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.538 [INFO][5151] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.542 [INFO][5151] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.548 [INFO][5151] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.548 [INFO][5151] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" 
host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.552 [INFO][5151] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04 Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.561 [INFO][5151] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 handle="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.576 [INFO][5151] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.199/26] block=192.168.88.192/26 handle="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.576 [INFO][5151] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.199/26] handle="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.576 [INFO][5151] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:15:32.647855 containerd[1594]: 2026-03-07 01:15:32.576 [INFO][5151] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.199/26] IPv6=[] ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" HandleID="k8s-pod-network.49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.649022 containerd[1594]: 2026-03-07 01:15:32.584 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0772850a-c739-466a-aa06-6d8eb68ff187", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"goldmane-5b85766d88-2c4qk", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.88.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6253329250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:32.649022 containerd[1594]: 2026-03-07 01:15:32.584 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.199/32] ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.649022 containerd[1594]: 2026-03-07 01:15:32.584 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6253329250 ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.649022 containerd[1594]: 2026-03-07 01:15:32.607 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.649022 containerd[1594]: 2026-03-07 01:15:32.614 [INFO][5101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0772850a-c739-466a-aa06-6d8eb68ff187", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04", Pod:"goldmane-5b85766d88-2c4qk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6253329250", MAC:"5a:d6:c8:7c:23:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:32.649022 containerd[1594]: 2026-03-07 01:15:32.634 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04" Namespace="calico-system" Pod="goldmane-5b85766d88-2c4qk" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:32.727634 
systemd-networkd[1221]: calibd02d8ec956: Link UP Mar 7 01:15:32.727987 systemd-networkd[1221]: calibd02d8ec956: Gained carrier Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.505 [INFO][5160] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0 calico-kube-controllers-74586d586- calico-system 67ade84f-aa00-4fe6-9543-b7e8369907a3 1043 0 2026-03-07 01:14:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74586d586 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4 calico-kube-controllers-74586d586-nnrsd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibd02d8ec956 [] [] }} ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.507 [INFO][5160] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.605 [INFO][5181] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" 
HandleID="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.634 [INFO][5181] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" HandleID="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", "pod":"calico-kube-controllers-74586d586-nnrsd", "timestamp":"2026-03-07 01:15:32.605888731 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186c60)} Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.634 [INFO][5181] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.635 [INFO][5181] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.635 [INFO][5181] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4' Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.640 [INFO][5181] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.661 [INFO][5181] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.667 [INFO][5181] ipam/ipam.go 526: Trying affinity for 192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.672 [INFO][5181] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.677 [INFO][5181] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.192/26 host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.678 [INFO][5181] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.192/26 handle="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.681 [INFO][5181] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.689 [INFO][5181] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.192/26 
handle="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.710 [INFO][5181] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.200/26] block=192.168.88.192/26 handle="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.711 [INFO][5181] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.200/26] handle="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" host="ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4" Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.711 [INFO][5181] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:32.769353 containerd[1594]: 2026-03-07 01:15:32.711 [INFO][5181] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.200/26] IPv6=[] ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" HandleID="k8s-pod-network.7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.773292 containerd[1594]: 2026-03-07 01:15:32.714 [INFO][5160] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0", GenerateName:"calico-kube-controllers-74586d586-", Namespace:"calico-system", SelfLink:"", UID:"67ade84f-aa00-4fe6-9543-b7e8369907a3", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74586d586", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"", Pod:"calico-kube-controllers-74586d586-nnrsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd02d8ec956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:32.773292 containerd[1594]: 2026-03-07 01:15:32.715 [INFO][5160] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.200/32] ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.773292 containerd[1594]: 2026-03-07 01:15:32.715 
[INFO][5160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd02d8ec956 ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.773292 containerd[1594]: 2026-03-07 01:15:32.726 [INFO][5160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.773292 containerd[1594]: 2026-03-07 01:15:32.727 [INFO][5160] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0", GenerateName:"calico-kube-controllers-74586d586-", Namespace:"calico-system", SelfLink:"", UID:"67ade84f-aa00-4fe6-9543-b7e8369907a3", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74586d586", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f", Pod:"calico-kube-controllers-74586d586-nnrsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd02d8ec956", MAC:"9a:e7:e3:6e:fc:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:32.773292 containerd[1594]: 2026-03-07 01:15:32.760 [INFO][5160] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f" Namespace="calico-system" Pod="calico-kube-controllers-74586d586-nnrsd" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:32.795780 containerd[1594]: time="2026-03-07T01:15:32.795113252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:32.795780 containerd[1594]: time="2026-03-07T01:15:32.795321478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:32.795780 containerd[1594]: time="2026-03-07T01:15:32.795354422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:32.795780 containerd[1594]: time="2026-03-07T01:15:32.795549203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:32.895442 containerd[1594]: time="2026-03-07T01:15:32.894821899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:32.895442 containerd[1594]: time="2026-03-07T01:15:32.894896669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:32.895442 containerd[1594]: time="2026-03-07T01:15:32.894924285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:32.895442 containerd[1594]: time="2026-03-07T01:15:32.895070318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:32.976235 systemd[1]: run-netns-cni\x2d0fe88ea0\x2d7df8\x2d444d\x2dae77\x2db7ca5e99b5ba.mount: Deactivated successfully. 
Mar 7 01:15:33.006801 containerd[1594]: time="2026-03-07T01:15:33.006751826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-2c4qk,Uid:0772850a-c739-466a-aa06-6d8eb68ff187,Namespace:calico-system,Attempt:1,} returns sandbox id \"49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04\"" Mar 7 01:15:33.076498 containerd[1594]: time="2026-03-07T01:15:33.076071912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74586d586-nnrsd,Uid:67ade84f-aa00-4fe6-9543-b7e8369907a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f\"" Mar 7 01:15:33.156611 systemd-networkd[1221]: calie4f40e17543: Gained IPv6LL Mar 7 01:15:33.660669 containerd[1594]: time="2026-03-07T01:15:33.660600519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:33.662186 containerd[1594]: time="2026-03-07T01:15:33.662117712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:15:33.663159 containerd[1594]: time="2026-03-07T01:15:33.663086853Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:33.667807 containerd[1594]: time="2026-03-07T01:15:33.667770538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:33.669163 containerd[1594]: time="2026-03-07T01:15:33.668964275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.233108945s" Mar 7 01:15:33.669163 containerd[1594]: time="2026-03-07T01:15:33.669056973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:15:33.671631 containerd[1594]: time="2026-03-07T01:15:33.671594876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:15:33.675238 containerd[1594]: time="2026-03-07T01:15:33.674940390Z" level=info msg="CreateContainer within sandbox \"f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:15:33.694498 containerd[1594]: time="2026-03-07T01:15:33.692543892Z" level=info msg="CreateContainer within sandbox \"f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c6dd999f56ae0dc04758e745b529016951c76794401f3d27c41c085e6e898bf\"" Mar 7 01:15:33.698490 containerd[1594]: time="2026-03-07T01:15:33.695584267Z" level=info msg="StartContainer for \"6c6dd999f56ae0dc04758e745b529016951c76794401f3d27c41c085e6e898bf\"" Mar 7 01:15:33.806106 containerd[1594]: time="2026-03-07T01:15:33.806005949Z" level=info msg="StartContainer for \"6c6dd999f56ae0dc04758e745b529016951c76794401f3d27c41c085e6e898bf\" returns successfully" Mar 7 01:15:34.308036 systemd-networkd[1221]: calibd02d8ec956: Gained IPv6LL Mar 7 01:15:34.627695 systemd-networkd[1221]: calic6253329250: Gained IPv6LL Mar 7 01:15:34.856572 containerd[1594]: time="2026-03-07T01:15:34.856176383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:34.859154 containerd[1594]: 
time="2026-03-07T01:15:34.859070949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:15:34.860640 containerd[1594]: time="2026-03-07T01:15:34.860583394Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:34.866002 containerd[1594]: time="2026-03-07T01:15:34.865945057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:34.868274 containerd[1594]: time="2026-03-07T01:15:34.868049731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.196407748s" Mar 7 01:15:34.868274 containerd[1594]: time="2026-03-07T01:15:34.868097358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:15:34.872617 containerd[1594]: time="2026-03-07T01:15:34.872385559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:15:34.878713 containerd[1594]: time="2026-03-07T01:15:34.878656088Z" level=info msg="CreateContainer within sandbox \"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:15:34.911205 containerd[1594]: time="2026-03-07T01:15:34.911052472Z" level=info msg="CreateContainer within sandbox \"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e97e016df977c6a3c706120f3a9c0371061689faa8b8986eea529dfef7f4b261\"" Mar 7 01:15:34.911907 containerd[1594]: time="2026-03-07T01:15:34.911868643Z" level=info msg="StartContainer for \"e97e016df977c6a3c706120f3a9c0371061689faa8b8986eea529dfef7f4b261\"" Mar 7 01:15:34.929731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221063177.mount: Deactivated successfully. Mar 7 01:15:35.070003 containerd[1594]: time="2026-03-07T01:15:35.069034126Z" level=info msg="StartContainer for \"e97e016df977c6a3c706120f3a9c0371061689faa8b8986eea529dfef7f4b261\" returns successfully" Mar 7 01:15:35.090294 containerd[1594]: time="2026-03-07T01:15:35.089938344Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:35.092557 containerd[1594]: time="2026-03-07T01:15:35.091640243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:15:35.096700 containerd[1594]: time="2026-03-07T01:15:35.096549126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 224.119178ms" Mar 7 01:15:35.096700 containerd[1594]: time="2026-03-07T01:15:35.096626288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:15:35.100221 containerd[1594]: time="2026-03-07T01:15:35.099970692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:15:35.105383 containerd[1594]: time="2026-03-07T01:15:35.105294166Z" 
level=info msg="CreateContainer within sandbox \"a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:15:35.126830 containerd[1594]: time="2026-03-07T01:15:35.126654900Z" level=info msg="CreateContainer within sandbox \"a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"68ad4f0553306df9743099f9ac82a6061d0243b46baef179f65cd60b6bed9884\"" Mar 7 01:15:35.128272 containerd[1594]: time="2026-03-07T01:15:35.128050185Z" level=info msg="StartContainer for \"68ad4f0553306df9743099f9ac82a6061d0243b46baef179f65cd60b6bed9884\"" Mar 7 01:15:35.326018 containerd[1594]: time="2026-03-07T01:15:35.325957438Z" level=info msg="StartContainer for \"68ad4f0553306df9743099f9ac82a6061d0243b46baef179f65cd60b6bed9884\" returns successfully" Mar 7 01:15:35.781632 kubelet[2759]: I0307 01:15:35.781511 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-dff69f69f-wmcc9" podStartSLOduration=35.545477011 podStartE2EDuration="41.781487288s" podCreationTimestamp="2026-03-07 01:14:54 +0000 UTC" firstStartedPulling="2026-03-07 01:15:27.434426269 +0000 UTC m=+51.026798701" lastFinishedPulling="2026-03-07 01:15:33.670436533 +0000 UTC m=+57.262808978" observedRunningTime="2026-03-07 01:15:34.271378596 +0000 UTC m=+57.863751067" watchObservedRunningTime="2026-03-07 01:15:35.781487288 +0000 UTC m=+59.373859741" Mar 7 01:15:35.964056 systemd[1]: run-containerd-runc-k8s.io-68ad4f0553306df9743099f9ac82a6061d0243b46baef179f65cd60b6bed9884-runc.o62IkP.mount: Deactivated successfully. 
Mar 7 01:15:36.582232 containerd[1594]: time="2026-03-07T01:15:36.580344318Z" level=info msg="StopPodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\"" Mar 7 01:15:36.794809 ntpd[1540]: Listen normally on 9 calie908be40a34 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 9 calie908be40a34 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 10 cali2e7e63d1b11 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 11 cali128b24a9ea0 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 12 calia49d34998df [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 13 calie4f40e17543 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 14 calic6253329250 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 7 01:15:36.798624 ntpd[1540]: 7 Mar 01:15:36 ntpd[1540]: Listen normally on 15 calibd02d8ec956 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 7 01:15:36.795541 ntpd[1540]: Listen normally on 10 cali2e7e63d1b11 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 01:15:36.795602 ntpd[1540]: Listen normally on 11 cali128b24a9ea0 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 01:15:36.795656 ntpd[1540]: Listen normally on 12 calia49d34998df [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 01:15:36.795708 ntpd[1540]: Listen normally on 13 calie4f40e17543 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 7 01:15:36.795761 ntpd[1540]: Listen normally on 14 calic6253329250 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 7 01:15:36.795827 ntpd[1540]: Listen normally on 15 calibd02d8ec956 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:36.835 [WARNING][5453] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25502944-3541-4549-a9a1-bb47aa3bb3f0", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f", Pod:"coredns-674b8bbfcf-6mtnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49d34998df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:36.841 [INFO][5453] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:36.841 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" iface="eth0" netns="" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:36.841 [INFO][5453] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:36.841 [INFO][5453] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.091 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.093 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.093 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.113 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.113 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.118 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:37.128177 containerd[1594]: 2026-03-07 01:15:37.122 [INFO][5453] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.128177 containerd[1594]: time="2026-03-07T01:15:37.128019393Z" level=info msg="TearDown network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" successfully" Mar 7 01:15:37.128177 containerd[1594]: time="2026-03-07T01:15:37.128057280Z" level=info msg="StopPodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" returns successfully" Mar 7 01:15:37.129922 containerd[1594]: time="2026-03-07T01:15:37.129492181Z" level=info msg="RemovePodSandbox for \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\"" Mar 7 01:15:37.129922 containerd[1594]: time="2026-03-07T01:15:37.129539503Z" level=info msg="Forcibly stopping sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\"" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.327 [WARNING][5480] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25502944-3541-4549-a9a1-bb47aa3bb3f0", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"90dfa3ca2ff1ea2a202915484ca9b7f5793571368aaa5e91079eecc1c950024f", Pod:"coredns-674b8bbfcf-6mtnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia49d34998df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.327 [INFO][5480] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.327 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" iface="eth0" netns="" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.327 [INFO][5480] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.327 [INFO][5480] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.439 [INFO][5487] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.440 [INFO][5487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.440 [INFO][5487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.456 [WARNING][5487] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.456 [INFO][5487] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" HandleID="k8s-pod-network.537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--6mtnf-eth0" Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.459 [INFO][5487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:37.473404 containerd[1594]: 2026-03-07 01:15:37.465 [INFO][5480] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07" Mar 7 01:15:37.474233 containerd[1594]: time="2026-03-07T01:15:37.473456675Z" level=info msg="TearDown network for sandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" successfully" Mar 7 01:15:37.492624 containerd[1594]: time="2026-03-07T01:15:37.491228615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:37.492624 containerd[1594]: time="2026-03-07T01:15:37.491344001Z" level=info msg="RemovePodSandbox \"537469677cb9a853afeb5678cca88feac2ea8c9467aac143b5b06eedeb664c07\" returns successfully" Mar 7 01:15:37.492624 containerd[1594]: time="2026-03-07T01:15:37.492230623Z" level=info msg="StopPodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\"" Mar 7 01:15:37.519985 kubelet[2759]: I0307 01:15:37.519331 2759 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.625 [WARNING][5501] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0", GenerateName:"calico-kube-controllers-74586d586-", Namespace:"calico-system", SelfLink:"", UID:"67ade84f-aa00-4fe6-9543-b7e8369907a3", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74586d586", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", 
ContainerID:"7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f", Pod:"calico-kube-controllers-74586d586-nnrsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd02d8ec956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.626 [INFO][5501] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.627 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" iface="eth0" netns="" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.627 [INFO][5501] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.627 [INFO][5501] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.730 [INFO][5509] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.730 [INFO][5509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.730 [INFO][5509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.752 [WARNING][5509] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.752 [INFO][5509] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.756 [INFO][5509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:37.767923 containerd[1594]: 2026-03-07 01:15:37.760 [INFO][5501] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:37.767923 containerd[1594]: time="2026-03-07T01:15:37.766728831Z" level=info msg="TearDown network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" successfully" Mar 7 01:15:37.767923 containerd[1594]: time="2026-03-07T01:15:37.766766722Z" level=info msg="StopPodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" returns successfully" Mar 7 01:15:37.772062 containerd[1594]: time="2026-03-07T01:15:37.768064590Z" level=info msg="RemovePodSandbox for \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\"" Mar 7 01:15:37.772062 containerd[1594]: time="2026-03-07T01:15:37.768105258Z" level=info msg="Forcibly stopping sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\"" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:37.911 [WARNING][5523] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0", GenerateName:"calico-kube-controllers-74586d586-", Namespace:"calico-system", SelfLink:"", UID:"67ade84f-aa00-4fe6-9543-b7e8369907a3", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74586d586", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f", Pod:"calico-kube-controllers-74586d586-nnrsd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd02d8ec956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:37.911 [INFO][5523] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:38.058478 containerd[1594]: 
2026-03-07 01:15:37.911 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" iface="eth0" netns="" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:37.911 [INFO][5523] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:37.911 [INFO][5523] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.002 [INFO][5530] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.003 [INFO][5530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.003 [INFO][5530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.031 [WARNING][5530] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.031 [INFO][5530] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" HandleID="k8s-pod-network.1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--kube--controllers--74586d586--nnrsd-eth0" Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.038 [INFO][5530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:38.058478 containerd[1594]: 2026-03-07 01:15:38.051 [INFO][5523] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7" Mar 7 01:15:38.058478 containerd[1594]: time="2026-03-07T01:15:38.058118476Z" level=info msg="TearDown network for sandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" successfully" Mar 7 01:15:38.071185 containerd[1594]: time="2026-03-07T01:15:38.071117815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:38.071348 containerd[1594]: time="2026-03-07T01:15:38.071236298Z" level=info msg="RemovePodSandbox \"1926e0ffb5fb3bc1ea83e9e122d667ce725d741c0fdb0c505028090936eb24d7\" returns successfully" Mar 7 01:15:38.073544 containerd[1594]: time="2026-03-07T01:15:38.073207074Z" level=info msg="StopPodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\"" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.193 [WARNING][5550] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0772850a-c739-466a-aa06-6d8eb68ff187", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04", Pod:"goldmane-5b85766d88-2c4qk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6253329250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.194 [INFO][5550] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.194 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" iface="eth0" netns="" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.194 [INFO][5550] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.195 [INFO][5550] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.264 [INFO][5558] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.264 [INFO][5558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.264 [INFO][5558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.278 [WARNING][5558] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.278 [INFO][5558] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.281 [INFO][5558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:38.288060 containerd[1594]: 2026-03-07 01:15:38.284 [INFO][5550] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.289914 containerd[1594]: time="2026-03-07T01:15:38.288174510Z" level=info msg="TearDown network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" successfully" Mar 7 01:15:38.289914 containerd[1594]: time="2026-03-07T01:15:38.288342060Z" level=info msg="StopPodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" returns successfully" Mar 7 01:15:38.292483 containerd[1594]: time="2026-03-07T01:15:38.292004190Z" level=info msg="RemovePodSandbox for \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\"" Mar 7 01:15:38.292483 containerd[1594]: time="2026-03-07T01:15:38.292051734Z" level=info msg="Forcibly stopping sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\"" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.440 [WARNING][5573] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0772850a-c739-466a-aa06-6d8eb68ff187", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04", Pod:"goldmane-5b85766d88-2c4qk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6253329250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.440 [INFO][5573] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.440 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" iface="eth0" netns="" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.441 [INFO][5573] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.441 [INFO][5573] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.505 [INFO][5580] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.505 [INFO][5580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.505 [INFO][5580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.516 [WARNING][5580] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.516 [INFO][5580] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" HandleID="k8s-pod-network.52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-goldmane--5b85766d88--2c4qk-eth0" Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.519 [INFO][5580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:38.528158 containerd[1594]: 2026-03-07 01:15:38.524 [INFO][5573] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08" Mar 7 01:15:38.529186 containerd[1594]: time="2026-03-07T01:15:38.529148264Z" level=info msg="TearDown network for sandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" successfully" Mar 7 01:15:38.535931 containerd[1594]: time="2026-03-07T01:15:38.535873213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:38.536315 containerd[1594]: time="2026-03-07T01:15:38.535951600Z" level=info msg="RemovePodSandbox \"52737d58371dc786018832a5056bda17f954e0d5e73936311adf1dc6e978ae08\" returns successfully" Mar 7 01:15:38.536530 containerd[1594]: time="2026-03-07T01:15:38.536473837Z" level=info msg="StopPodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\"" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.606 [WARNING][5594] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e", Pod:"calico-apiserver-dff69f69f-wmcc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie908be40a34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.606 [INFO][5594] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.606 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" iface="eth0" netns="" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.606 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.606 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.659 [INFO][5601] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.659 [INFO][5601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.659 [INFO][5601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.668 [WARNING][5601] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.669 [INFO][5601] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.671 [INFO][5601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:38.678268 containerd[1594]: 2026-03-07 01:15:38.674 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.678268 containerd[1594]: time="2026-03-07T01:15:38.678117263Z" level=info msg="TearDown network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" successfully" Mar 7 01:15:38.678268 containerd[1594]: time="2026-03-07T01:15:38.678150133Z" level=info msg="StopPodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" returns successfully" Mar 7 01:15:38.680055 containerd[1594]: time="2026-03-07T01:15:38.680014455Z" level=info msg="RemovePodSandbox for \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\"" Mar 7 01:15:38.680188 containerd[1594]: time="2026-03-07T01:15:38.680066669Z" level=info msg="Forcibly stopping sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\"" Mar 7 01:15:38.757139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131915444.mount: Deactivated successfully. Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.764 [WARNING][5615] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"fb5bb5f4-0835-460d-ace6-a3a9b6df7e6a", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"f3d0cb872d3390097f66c3e4f8a897d04fc47197a15382d955029e553c0e9e0e", Pod:"calico-apiserver-dff69f69f-wmcc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie908be40a34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.764 [INFO][5615] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.764 [INFO][5615] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" iface="eth0" netns="" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.764 [INFO][5615] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.764 [INFO][5615] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.811 [INFO][5623] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.812 [INFO][5623] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.812 [INFO][5623] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.824 [WARNING][5623] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.825 [INFO][5623] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" HandleID="k8s-pod-network.d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--wmcc9-eth0" Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.829 [INFO][5623] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:38.837225 containerd[1594]: 2026-03-07 01:15:38.833 [INFO][5615] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a" Mar 7 01:15:38.837225 containerd[1594]: time="2026-03-07T01:15:38.836969708Z" level=info msg="TearDown network for sandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" successfully" Mar 7 01:15:38.844295 containerd[1594]: time="2026-03-07T01:15:38.844018565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:38.844295 containerd[1594]: time="2026-03-07T01:15:38.844120244Z" level=info msg="RemovePodSandbox \"d2ce22fe99ef228df3694606cc59393465e7e8c99198ec53e32b8787a3a5f75a\" returns successfully" Mar 7 01:15:38.845891 containerd[1594]: time="2026-03-07T01:15:38.845516044Z" level=info msg="StopPodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\"" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.926 [WARNING][5642] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.926 [INFO][5642] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.926 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" iface="eth0" netns="" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.926 [INFO][5642] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.926 [INFO][5642] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.977 [INFO][5649] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.978 [INFO][5649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.978 [INFO][5649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.989 [WARNING][5649] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.989 [INFO][5649] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.990 [INFO][5649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:38.996770 containerd[1594]: 2026-03-07 01:15:38.994 [INFO][5642] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:38.997771 containerd[1594]: time="2026-03-07T01:15:38.996885482Z" level=info msg="TearDown network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" successfully" Mar 7 01:15:38.997771 containerd[1594]: time="2026-03-07T01:15:38.996923894Z" level=info msg="StopPodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" returns successfully" Mar 7 01:15:38.998619 containerd[1594]: time="2026-03-07T01:15:38.998189621Z" level=info msg="RemovePodSandbox for \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\"" Mar 7 01:15:38.998619 containerd[1594]: time="2026-03-07T01:15:38.998235207Z" level=info msg="Forcibly stopping sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\"" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.075 [WARNING][5663] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward 
with the clean up ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" WorkloadEndpoint="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.076 [INFO][5663] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.076 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" iface="eth0" netns="" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.076 [INFO][5663] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.076 [INFO][5663] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.118 [INFO][5671] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.119 [INFO][5671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.119 [INFO][5671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.133 [WARNING][5671] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.136 [INFO][5671] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" HandleID="k8s-pod-network.9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-whisker--6f8dfcdf94--vxblc-eth0" Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.142 [INFO][5671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:39.148212 containerd[1594]: 2026-03-07 01:15:39.144 [INFO][5663] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098" Mar 7 01:15:39.148212 containerd[1594]: time="2026-03-07T01:15:39.147090636Z" level=info msg="TearDown network for sandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" successfully" Mar 7 01:15:39.155199 containerd[1594]: time="2026-03-07T01:15:39.155029552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:39.155376 containerd[1594]: time="2026-03-07T01:15:39.155230513Z" level=info msg="RemovePodSandbox \"9215d5f1cdfc5b81447a8da150e0882006f28f20c49d9028759a33163ba50098\" returns successfully" Mar 7 01:15:39.157194 containerd[1594]: time="2026-03-07T01:15:39.156876784Z" level=info msg="StopPodSandbox for \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\"" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.236 [WARNING][5685] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b92beb4-baab-4ede-a5b0-75188956c922", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737", Pod:"coredns-674b8bbfcf-v6pgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali128b24a9ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.236 [INFO][5685] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.236 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" iface="eth0" netns="" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.236 [INFO][5685] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.236 [INFO][5685] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.282 [INFO][5693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.285 [INFO][5693] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.285 [INFO][5693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.299 [WARNING][5693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.300 [INFO][5693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.302 [INFO][5693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:39.308290 containerd[1594]: 2026-03-07 01:15:39.304 [INFO][5685] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.309480 containerd[1594]: time="2026-03-07T01:15:39.308344875Z" level=info msg="TearDown network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" successfully" Mar 7 01:15:39.309480 containerd[1594]: time="2026-03-07T01:15:39.308561773Z" level=info msg="StopPodSandbox for \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" returns successfully" Mar 7 01:15:39.309576 containerd[1594]: time="2026-03-07T01:15:39.309502677Z" level=info msg="RemovePodSandbox for \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\"" Mar 7 01:15:39.309576 containerd[1594]: time="2026-03-07T01:15:39.309544085Z" level=info msg="Forcibly stopping sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\"" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.374 [WARNING][5707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b92beb4-baab-4ede-a5b0-75188956c922", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"82809a44007dede72b2e4bbb03225203fd448598fc19059aa491c07107cd0737", Pod:"coredns-674b8bbfcf-v6pgz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali128b24a9ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.374 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.374 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" iface="eth0" netns="" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.374 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.375 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.436 [INFO][5714] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.437 [INFO][5714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.437 [INFO][5714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.453 [WARNING][5714] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.453 [INFO][5714] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" HandleID="k8s-pod-network.e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-coredns--674b8bbfcf--v6pgz-eth0" Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.455 [INFO][5714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:39.461829 containerd[1594]: 2026-03-07 01:15:39.458 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea" Mar 7 01:15:39.462985 containerd[1594]: time="2026-03-07T01:15:39.461879788Z" level=info msg="TearDown network for sandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" successfully" Mar 7 01:15:39.473648 containerd[1594]: time="2026-03-07T01:15:39.473583456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:39.473911 containerd[1594]: time="2026-03-07T01:15:39.473685031Z" level=info msg="RemovePodSandbox \"e8c37e9f89c325c02a9dca6f43cb4cbf6128a6cbea3f3358cf611ca4d33709ea\" returns successfully" Mar 7 01:15:39.474582 containerd[1594]: time="2026-03-07T01:15:39.474520195Z" level=info msg="StopPodSandbox for \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\"" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.546 [WARNING][5728] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"2959f216-b13f-411b-bdca-1a485a69cf02", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48", Pod:"calico-apiserver-dff69f69f-t24f6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie4f40e17543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.546 [INFO][5728] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.546 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" iface="eth0" netns="" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.546 [INFO][5728] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.546 [INFO][5728] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.591 [INFO][5736] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.591 [INFO][5736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.591 [INFO][5736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.606 [WARNING][5736] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.606 [INFO][5736] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.608 [INFO][5736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:39.615407 containerd[1594]: 2026-03-07 01:15:39.612 [INFO][5728] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.617092 containerd[1594]: time="2026-03-07T01:15:39.615992531Z" level=info msg="TearDown network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" successfully" Mar 7 01:15:39.617092 containerd[1594]: time="2026-03-07T01:15:39.616039165Z" level=info msg="StopPodSandbox for \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" returns successfully" Mar 7 01:15:39.617092 containerd[1594]: time="2026-03-07T01:15:39.616958809Z" level=info msg="RemovePodSandbox for \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\"" Mar 7 01:15:39.617092 containerd[1594]: time="2026-03-07T01:15:39.617000588Z" level=info msg="Forcibly stopping sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\"" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.703 [WARNING][5750] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0", GenerateName:"calico-apiserver-dff69f69f-", Namespace:"calico-system", SelfLink:"", UID:"2959f216-b13f-411b-bdca-1a485a69cf02", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dff69f69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"a1ab6f57d73fa88e66098d3159aa219689cc764e8aff9db39da0461b2054be48", Pod:"calico-apiserver-dff69f69f-t24f6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie4f40e17543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.704 [INFO][5750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.704 [INFO][5750] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" iface="eth0" netns="" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.704 [INFO][5750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.704 [INFO][5750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.754 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.755 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.755 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.771 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.771 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" HandleID="k8s-pod-network.5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-calico--apiserver--dff69f69f--t24f6-eth0" Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.774 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:39.803157 containerd[1594]: 2026-03-07 01:15:39.784 [INFO][5750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3" Mar 7 01:15:39.803157 containerd[1594]: time="2026-03-07T01:15:39.799616481Z" level=info msg="TearDown network for sandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" successfully" Mar 7 01:15:39.816096 containerd[1594]: time="2026-03-07T01:15:39.816046607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:39.816457 containerd[1594]: time="2026-03-07T01:15:39.816352574Z" level=info msg="RemovePodSandbox \"5b024f99333ac4f4b3dc2c63d85ba28bb418e8d0002b53faa677c91d2cf093c3\" returns successfully" Mar 7 01:15:39.820678 containerd[1594]: time="2026-03-07T01:15:39.820642030Z" level=info msg="StopPodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\"" Mar 7 01:15:39.924525 containerd[1594]: time="2026-03-07T01:15:39.924062983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.926719 containerd[1594]: time="2026-03-07T01:15:39.926625933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:15:39.928552 containerd[1594]: time="2026-03-07T01:15:39.928505353Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.934802 containerd[1594]: time="2026-03-07T01:15:39.934001812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.935016 containerd[1594]: time="2026-03-07T01:15:39.934979664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.834959852s" Mar 7 01:15:39.935470 containerd[1594]: time="2026-03-07T01:15:39.935439479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:15:39.940241 containerd[1594]: time="2026-03-07T01:15:39.939436759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:15:39.944241 containerd[1594]: time="2026-03-07T01:15:39.944028147Z" level=info msg="CreateContainer within sandbox \"49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:15:39.969707 containerd[1594]: time="2026-03-07T01:15:39.969639949Z" level=info msg="CreateContainer within sandbox \"49ca0f93936c8787a96e578b1667b28dffddf584c7f876a8d3a88d1c9c23ea04\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"98ff90f9e2571aa57268e95858ee72cbeb9b67f4536838f7ba80bc1bf947ddc9\"" Mar 7 01:15:39.975969 containerd[1594]: time="2026-03-07T01:15:39.975926983Z" level=info msg="StartContainer for \"98ff90f9e2571aa57268e95858ee72cbeb9b67f4536838f7ba80bc1bf947ddc9\"" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.917 [WARNING][5782] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e38da1e-df53-4611-ac35-2d4dd975c9f5", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5", Pod:"csi-node-driver-njdzt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e7e63d1b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.917 [INFO][5782] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.917 
[INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" iface="eth0" netns="" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.917 [INFO][5782] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.917 [INFO][5782] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.977 [INFO][5794] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.977 [INFO][5794] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.978 [INFO][5794] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.992 [WARNING][5794] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.992 [INFO][5794] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:39.995 [INFO][5794] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:40.003467 containerd[1594]: 2026-03-07 01:15:40.000 [INFO][5782] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.005434 containerd[1594]: time="2026-03-07T01:15:40.003527241Z" level=info msg="TearDown network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" successfully" Mar 7 01:15:40.005434 containerd[1594]: time="2026-03-07T01:15:40.003562236Z" level=info msg="StopPodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" returns successfully" Mar 7 01:15:40.005434 containerd[1594]: time="2026-03-07T01:15:40.004335400Z" level=info msg="RemovePodSandbox for \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\"" Mar 7 01:15:40.005434 containerd[1594]: time="2026-03-07T01:15:40.004413810Z" level=info msg="Forcibly stopping sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\"" Mar 7 01:15:40.152317 containerd[1594]: time="2026-03-07T01:15:40.152157720Z" level=info msg="StartContainer for 
\"98ff90f9e2571aa57268e95858ee72cbeb9b67f4536838f7ba80bc1bf947ddc9\" returns successfully" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.106 [WARNING][5817] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e38da1e-df53-4611-ac35-2d4dd975c9f5", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260306-2100-0f9cfe8e89f1fcd643d4", ContainerID:"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5", Pod:"csi-node-driver-njdzt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2e7e63d1b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.106 [INFO][5817] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.106 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" iface="eth0" netns="" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.106 [INFO][5817] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.106 [INFO][5817] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.151 [INFO][5838] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.151 [INFO][5838] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.151 [INFO][5838] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.166 [WARNING][5838] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.166 [INFO][5838] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" HandleID="k8s-pod-network.75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Workload="ci--4081--3--6--nightly--20260306--2100--0f9cfe8e89f1fcd643d4-k8s-csi--node--driver--njdzt-eth0" Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.169 [INFO][5838] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:15:40.178330 containerd[1594]: 2026-03-07 01:15:40.174 [INFO][5817] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189" Mar 7 01:15:40.178330 containerd[1594]: time="2026-03-07T01:15:40.178024991Z" level=info msg="TearDown network for sandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" successfully" Mar 7 01:15:40.183746 containerd[1594]: time="2026-03-07T01:15:40.183314844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:15:40.183746 containerd[1594]: time="2026-03-07T01:15:40.183481545Z" level=info msg="RemovePodSandbox \"75968f9206ae7955054a3c8ff7e49ecc34b21482bf0d9229e917c007810c8189\" returns successfully"
Mar 7 01:15:40.581888 kubelet[2759]: I0307 01:15:40.581672 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-dff69f69f-t24f6" podStartSLOduration=43.990295589 podStartE2EDuration="46.581646957s" podCreationTimestamp="2026-03-07 01:14:54 +0000 UTC" firstStartedPulling="2026-03-07 01:15:32.507042428 +0000 UTC m=+56.099414872" lastFinishedPulling="2026-03-07 01:15:35.098393793 +0000 UTC m=+58.690766240" observedRunningTime="2026-03-07 01:15:36.532949021 +0000 UTC m=+60.125321481" watchObservedRunningTime="2026-03-07 01:15:40.581646957 +0000 UTC m=+64.174019416"
Mar 7 01:15:40.588178 kubelet[2759]: I0307 01:15:40.584920 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-2c4qk" podStartSLOduration=38.658886349 podStartE2EDuration="45.584893712s" podCreationTimestamp="2026-03-07 01:14:55 +0000 UTC" firstStartedPulling="2026-03-07 01:15:33.011287269 +0000 UTC m=+56.603659704" lastFinishedPulling="2026-03-07 01:15:39.937294621 +0000 UTC m=+63.529667067" observedRunningTime="2026-03-07 01:15:40.579306711 +0000 UTC m=+64.171679193" watchObservedRunningTime="2026-03-07 01:15:40.584893712 +0000 UTC m=+64.177266170"
Mar 7 01:15:42.698488 containerd[1594]: time="2026-03-07T01:15:42.698423778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:42.699934 containerd[1594]: time="2026-03-07T01:15:42.699822636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 7 01:15:42.700955 containerd[1594]: time="2026-03-07T01:15:42.700915593Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:42.704216 containerd[1594]: time="2026-03-07T01:15:42.704088214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:42.705770 containerd[1594]: time="2026-03-07T01:15:42.705211417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.765527898s"
Mar 7 01:15:42.705770 containerd[1594]: time="2026-03-07T01:15:42.705263450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 7 01:15:42.708515 containerd[1594]: time="2026-03-07T01:15:42.707326003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 7 01:15:42.736705 containerd[1594]: time="2026-03-07T01:15:42.736647406Z" level=info msg="CreateContainer within sandbox \"7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 7 01:15:42.757173 containerd[1594]: time="2026-03-07T01:15:42.754205718Z" level=info msg="CreateContainer within sandbox \"7b6974e8a321998140d04420c6a2dc5178b337382ab27e7191f96a1d8417131f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a049dcfd1c7b4fd9febc77717d91cd0008fe29252cac15cd9e25cf208b462645\""
Mar 7 01:15:42.757173 containerd[1594]: time="2026-03-07T01:15:42.755460669Z" level=info msg="StartContainer for \"a049dcfd1c7b4fd9febc77717d91cd0008fe29252cac15cd9e25cf208b462645\""
Mar 7 01:15:42.869301 containerd[1594]: time="2026-03-07T01:15:42.869240049Z" level=info msg="StartContainer for \"a049dcfd1c7b4fd9febc77717d91cd0008fe29252cac15cd9e25cf208b462645\" returns successfully"
Mar 7 01:15:43.827384 kubelet[2759]: I0307 01:15:43.825323 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74586d586-nnrsd" podStartSLOduration=38.199789682 podStartE2EDuration="47.825297603s" podCreationTimestamp="2026-03-07 01:14:56 +0000 UTC" firstStartedPulling="2026-03-07 01:15:33.081145971 +0000 UTC m=+56.673518403" lastFinishedPulling="2026-03-07 01:15:42.706653885 +0000 UTC m=+66.299026324" observedRunningTime="2026-03-07 01:15:43.633476405 +0000 UTC m=+67.225848868" watchObservedRunningTime="2026-03-07 01:15:43.825297603 +0000 UTC m=+67.417670060"
Mar 7 01:15:43.939800 systemd[1]: Started sshd@7-10.128.0.18:22-68.220.241.50:42428.service - OpenSSH per-connection server daemon (68.220.241.50:42428).
Mar 7 01:15:44.196021 sshd[5992]: Accepted publickey for core from 68.220.241.50 port 42428 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:15:44.198690 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:15:44.211078 systemd-logind[1570]: New session 8 of user core.
Mar 7 01:15:44.217505 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:15:44.269734 containerd[1594]: time="2026-03-07T01:15:44.269676457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:44.271462 containerd[1594]: time="2026-03-07T01:15:44.271398002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 7 01:15:44.272771 containerd[1594]: time="2026-03-07T01:15:44.272706189Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:44.276237 containerd[1594]: time="2026-03-07T01:15:44.276159846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:15:44.277751 containerd[1594]: time="2026-03-07T01:15:44.277281499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.56991025s"
Mar 7 01:15:44.277751 containerd[1594]: time="2026-03-07T01:15:44.277327732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 7 01:15:44.283297 containerd[1594]: time="2026-03-07T01:15:44.283248474Z" level=info msg="CreateContainer within sandbox \"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 7 01:15:44.304084 containerd[1594]: time="2026-03-07T01:15:44.304008400Z" level=info msg="CreateContainer within sandbox \"927ea0507bd0d4d601b836de3a7ff8351911022e4d3bdc9ac22e5358f5ce3af5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"93839b2692ff8289580f83be27f3adc7d02c1df9c22749e739c91ec6a63b302d\""
Mar 7 01:15:44.305108 containerd[1594]: time="2026-03-07T01:15:44.304969816Z" level=info msg="StartContainer for \"93839b2692ff8289580f83be27f3adc7d02c1df9c22749e739c91ec6a63b302d\""
Mar 7 01:15:44.454444 containerd[1594]: time="2026-03-07T01:15:44.452139700Z" level=info msg="StartContainer for \"93839b2692ff8289580f83be27f3adc7d02c1df9c22749e739c91ec6a63b302d\" returns successfully"
Mar 7 01:15:44.556673 sshd[5992]: pam_unix(sshd:session): session closed for user core
Mar 7 01:15:44.563948 systemd[1]: sshd@7-10.128.0.18:22-68.220.241.50:42428.service: Deactivated successfully.
Mar 7 01:15:44.570170 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:15:44.571761 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:15:44.573913 systemd-logind[1570]: Removed session 8.
Mar 7 01:15:44.805933 kubelet[2759]: I0307 01:15:44.805807 2759 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 7 01:15:44.806520 kubelet[2759]: I0307 01:15:44.806158 2759 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 7 01:15:49.596480 systemd[1]: Started sshd@8-10.128.0.18:22-68.220.241.50:42438.service - OpenSSH per-connection server daemon (68.220.241.50:42438).
Mar 7 01:15:49.830155 sshd[6048]: Accepted publickey for core from 68.220.241.50 port 42438 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:15:49.832143 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:15:49.838847 systemd-logind[1570]: New session 9 of user core.
Mar 7 01:15:49.843864 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:15:50.085467 sshd[6048]: pam_unix(sshd:session): session closed for user core
Mar 7 01:15:50.093244 systemd[1]: sshd@8-10.128.0.18:22-68.220.241.50:42438.service: Deactivated successfully.
Mar 7 01:15:50.099449 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:15:50.099523 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:15:50.102397 systemd-logind[1570]: Removed session 9.
Mar 7 01:15:55.122096 systemd[1]: Started sshd@9-10.128.0.18:22-68.220.241.50:42252.service - OpenSSH per-connection server daemon (68.220.241.50:42252).
Mar 7 01:15:55.351861 sshd[6111]: Accepted publickey for core from 68.220.241.50 port 42252 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:15:55.354098 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:15:55.360606 systemd-logind[1570]: New session 10 of user core.
Mar 7 01:15:55.367899 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:15:55.634529 sshd[6111]: pam_unix(sshd:session): session closed for user core
Mar 7 01:15:55.639693 systemd[1]: sshd@9-10.128.0.18:22-68.220.241.50:42252.service: Deactivated successfully.
Mar 7 01:15:55.646452 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:15:55.649112 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:15:55.650829 systemd-logind[1570]: Removed session 10.
Mar 7 01:16:00.674476 systemd[1]: Started sshd@10-10.128.0.18:22-68.220.241.50:42258.service - OpenSSH per-connection server daemon (68.220.241.50:42258).
Mar 7 01:16:00.907322 sshd[6136]: Accepted publickey for core from 68.220.241.50 port 42258 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:00.909261 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:00.916000 systemd-logind[1570]: New session 11 of user core.
Mar 7 01:16:00.925832 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:16:01.206411 sshd[6136]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:01.218196 systemd[1]: sshd@10-10.128.0.18:22-68.220.241.50:42258.service: Deactivated successfully.
Mar 7 01:16:01.236114 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:16:01.236935 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:16:01.243728 systemd-logind[1570]: Removed session 11.
Mar 7 01:16:04.777631 kubelet[2759]: I0307 01:16:04.777048 2759 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:16:04.824274 kubelet[2759]: I0307 01:16:04.824179 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-njdzt" podStartSLOduration=54.027711481 podStartE2EDuration="1m9.824158675s" podCreationTimestamp="2026-03-07 01:14:55 +0000 UTC" firstStartedPulling="2026-03-07 01:15:28.482327149 +0000 UTC m=+52.074699581" lastFinishedPulling="2026-03-07 01:15:44.278774337 +0000 UTC m=+67.871146775" observedRunningTime="2026-03-07 01:15:44.612638444 +0000 UTC m=+68.205010899" watchObservedRunningTime="2026-03-07 01:16:04.824158675 +0000 UTC m=+88.416531158"
Mar 7 01:16:06.244783 systemd[1]: Started sshd@11-10.128.0.18:22-68.220.241.50:34614.service - OpenSSH per-connection server daemon (68.220.241.50:34614).
Mar 7 01:16:06.481763 sshd[6169]: Accepted publickey for core from 68.220.241.50 port 34614 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:06.484135 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:06.491126 systemd-logind[1570]: New session 12 of user core.
Mar 7 01:16:06.498764 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:16:06.743424 sshd[6169]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:06.754501 systemd[1]: sshd@11-10.128.0.18:22-68.220.241.50:34614.service: Deactivated successfully.
Mar 7 01:16:06.765102 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:16:06.766887 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:16:06.771328 systemd-logind[1570]: Removed session 12.
Mar 7 01:16:06.796888 systemd[1]: Started sshd@12-10.128.0.18:22-68.220.241.50:34616.service - OpenSSH per-connection server daemon (68.220.241.50:34616).
Mar 7 01:16:07.051140 sshd[6184]: Accepted publickey for core from 68.220.241.50 port 34616 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:07.052033 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:07.061435 systemd-logind[1570]: New session 13 of user core.
Mar 7 01:16:07.066830 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:16:07.342373 sshd[6184]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:07.356772 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:16:07.358133 systemd[1]: sshd@12-10.128.0.18:22-68.220.241.50:34616.service: Deactivated successfully.
Mar 7 01:16:07.384931 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:16:07.393628 systemd-logind[1570]: Removed session 13.
Mar 7 01:16:07.416816 systemd[1]: Started sshd@13-10.128.0.18:22-68.220.241.50:34632.service - OpenSSH per-connection server daemon (68.220.241.50:34632).
Mar 7 01:16:07.728925 sshd[6197]: Accepted publickey for core from 68.220.241.50 port 34632 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:07.731288 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:07.742435 systemd-logind[1570]: New session 14 of user core.
Mar 7 01:16:07.746339 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:16:07.999057 sshd[6197]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:08.004645 systemd[1]: sshd@13-10.128.0.18:22-68.220.241.50:34632.service: Deactivated successfully.
Mar 7 01:16:08.011622 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:16:08.012550 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:16:08.015535 systemd-logind[1570]: Removed session 14.
Mar 7 01:16:13.036820 systemd[1]: Started sshd@14-10.128.0.18:22-68.220.241.50:38948.service - OpenSSH per-connection server daemon (68.220.241.50:38948).
Mar 7 01:16:13.255541 sshd[6260]: Accepted publickey for core from 68.220.241.50 port 38948 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:13.257695 sshd[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:13.264478 systemd-logind[1570]: New session 15 of user core.
Mar 7 01:16:13.266845 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:16:13.570657 sshd[6260]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:13.577204 systemd[1]: sshd@14-10.128.0.18:22-68.220.241.50:38948.service: Deactivated successfully.
Mar 7 01:16:13.577745 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:16:13.585935 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:16:13.587902 systemd-logind[1570]: Removed session 15.
Mar 7 01:16:13.611485 systemd[1]: Started sshd@15-10.128.0.18:22-68.220.241.50:38960.service - OpenSSH per-connection server daemon (68.220.241.50:38960).
Mar 7 01:16:13.878790 sshd[6285]: Accepted publickey for core from 68.220.241.50 port 38960 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:13.880348 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:13.887171 systemd-logind[1570]: New session 16 of user core.
Mar 7 01:16:13.894019 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:16:14.253306 sshd[6285]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:14.258211 systemd[1]: sshd@15-10.128.0.18:22-68.220.241.50:38960.service: Deactivated successfully.
Mar 7 01:16:14.265885 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:16:14.268182 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:16:14.269811 systemd-logind[1570]: Removed session 16.
Mar 7 01:16:14.293139 systemd[1]: Started sshd@16-10.128.0.18:22-68.220.241.50:38966.service - OpenSSH per-connection server daemon (68.220.241.50:38966).
Mar 7 01:16:14.518179 sshd[6306]: Accepted publickey for core from 68.220.241.50 port 38966 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:14.520602 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:14.527988 systemd-logind[1570]: New session 17 of user core.
Mar 7 01:16:14.535476 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:16:15.422347 sshd[6306]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:15.433955 systemd[1]: sshd@16-10.128.0.18:22-68.220.241.50:38966.service: Deactivated successfully.
Mar 7 01:16:15.445827 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:16:15.447613 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:16:15.459760 systemd-logind[1570]: Removed session 17.
Mar 7 01:16:15.467775 systemd[1]: Started sshd@17-10.128.0.18:22-68.220.241.50:38972.service - OpenSSH per-connection server daemon (68.220.241.50:38972).
Mar 7 01:16:15.701284 sshd[6330]: Accepted publickey for core from 68.220.241.50 port 38972 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:15.703750 sshd[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:15.711627 systemd-logind[1570]: New session 18 of user core.
Mar 7 01:16:15.717766 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:16:16.103257 sshd[6330]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:16.111122 systemd[1]: sshd@17-10.128.0.18:22-68.220.241.50:38972.service: Deactivated successfully.
Mar 7 01:16:16.116077 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:16:16.117018 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:16:16.119266 systemd-logind[1570]: Removed session 18.
Mar 7 01:16:16.142829 systemd[1]: Started sshd@18-10.128.0.18:22-68.220.241.50:38986.service - OpenSSH per-connection server daemon (68.220.241.50:38986).
Mar 7 01:16:16.369015 sshd[6344]: Accepted publickey for core from 68.220.241.50 port 38986 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:16.370686 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:16.377177 systemd-logind[1570]: New session 19 of user core.
Mar 7 01:16:16.384837 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:16:16.618690 sshd[6344]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:16.631957 systemd[1]: sshd@18-10.128.0.18:22-68.220.241.50:38986.service: Deactivated successfully.
Mar 7 01:16:16.637233 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:16:16.638512 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:16:16.640031 systemd-logind[1570]: Removed session 19.
Mar 7 01:16:21.659237 systemd[1]: Started sshd@19-10.128.0.18:22-68.220.241.50:38994.service - OpenSSH per-connection server daemon (68.220.241.50:38994).
Mar 7 01:16:21.895701 sshd[6358]: Accepted publickey for core from 68.220.241.50 port 38994 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:21.896764 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:21.908017 systemd-logind[1570]: New session 20 of user core.
Mar 7 01:16:21.913808 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:16:22.144501 sshd[6358]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:22.150700 systemd[1]: sshd@19-10.128.0.18:22-68.220.241.50:38994.service: Deactivated successfully.
Mar 7 01:16:22.158172 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:16:22.159612 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:16:22.161561 systemd-logind[1570]: Removed session 20.
Mar 7 01:16:27.182258 systemd[1]: Started sshd@20-10.128.0.18:22-68.220.241.50:51336.service - OpenSSH per-connection server daemon (68.220.241.50:51336).
Mar 7 01:16:27.425197 sshd[6396]: Accepted publickey for core from 68.220.241.50 port 51336 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:27.427317 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:27.434560 systemd-logind[1570]: New session 21 of user core.
Mar 7 01:16:27.437809 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:16:27.718302 sshd[6396]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:27.732516 systemd[1]: sshd@20-10.128.0.18:22-68.220.241.50:51336.service: Deactivated successfully.
Mar 7 01:16:27.734874 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:16:27.748833 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:16:27.755096 systemd-logind[1570]: Removed session 21.
Mar 7 01:16:32.753776 systemd[1]: Started sshd@21-10.128.0.18:22-68.220.241.50:44408.service - OpenSSH per-connection server daemon (68.220.241.50:44408).
Mar 7 01:16:32.967996 sshd[6410]: Accepted publickey for core from 68.220.241.50 port 44408 ssh2: RSA SHA256:jdUW2SiGvDHde8/j8buAnRgGZcGJNqk50qNgNNnHf0M
Mar 7 01:16:32.970013 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:32.976688 systemd-logind[1570]: New session 22 of user core.
Mar 7 01:16:32.984822 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:16:33.219679 sshd[6410]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:33.225437 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:16:33.226998 systemd[1]: sshd@21-10.128.0.18:22-68.220.241.50:44408.service: Deactivated successfully.
Mar 7 01:16:33.232593 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:16:33.234449 systemd-logind[1570]: Removed session 22.