Mar 14 00:23:56.110728 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:23:56.110809 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:23:56.110827 kernel: BIOS-provided physical RAM map:
Mar 14 00:23:56.110841 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 14 00:23:56.110854 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 14 00:23:56.110868 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 14 00:23:56.110884 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 14 00:23:56.110903 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 14 00:23:56.110918 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Mar 14 00:23:56.110932 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Mar 14 00:23:56.110947 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Mar 14 00:23:56.110961 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Mar 14 00:23:56.110975 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 14 00:23:56.110989 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 14 00:23:56.111012 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 14 00:23:56.111029 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 14 00:23:56.111045 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 14 00:23:56.111062 kernel: NX (Execute Disable) protection: active
Mar 14 00:23:56.111079 kernel: APIC: Static calls initialized
Mar 14 00:23:56.111096 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:23:56.111114 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Mar 14 00:23:56.111130 kernel: SMBIOS 2.4 present.
Mar 14 00:23:56.111145 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Mar 14 00:23:56.111160 kernel: Hypervisor detected: KVM
Mar 14 00:23:56.111180 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:23:56.111197 kernel: kvm-clock: using sched offset of 12985412776 cycles
Mar 14 00:23:56.111215 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:23:56.111233 kernel: tsc: Detected 2299.998 MHz processor
Mar 14 00:23:56.111250 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:23:56.111267 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:23:56.111284 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 14 00:23:56.111302 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 14 00:23:56.111319 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:23:56.111365 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 14 00:23:56.111382 kernel: Using GB pages for direct mapping
Mar 14 00:23:56.111408 kernel: Secure boot disabled
Mar 14 00:23:56.111424 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:23:56.111441 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 14 00:23:56.111458 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 14 00:23:56.111476 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 14 00:23:56.111502 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 14 00:23:56.111524 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 14 00:23:56.111541 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Mar 14 00:23:56.111560 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 14 00:23:56.111579 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 14 00:23:56.111597 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 14 00:23:56.111615 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 14 00:23:56.111637 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 14 00:23:56.111655 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 14 00:23:56.111673 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 14 00:23:56.111691 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 14 00:23:56.111710 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 14 00:23:56.111727 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 14 00:23:56.111745 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 14 00:23:56.111763 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 14 00:23:56.111781 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 14 00:23:56.111803 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 14 00:23:56.111821 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 14 00:23:56.111839 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 14 00:23:56.111858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 14 00:23:56.111876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 14 00:23:56.111894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 14 00:23:56.111914 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 14 00:23:56.111933 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 14 00:23:56.111951 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Mar 14 00:23:56.111972 kernel: Zone ranges:
Mar 14 00:23:56.111989 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:23:56.112008 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 14 00:23:56.112027 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Mar 14 00:23:56.112045 kernel: Movable zone start for each node
Mar 14 00:23:56.112063 kernel: Early memory node ranges
Mar 14 00:23:56.112081 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 14 00:23:56.112100 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 14 00:23:56.112118 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Mar 14 00:23:56.112140 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 14 00:23:56.112157 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 14 00:23:56.112176 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 14 00:23:56.112194 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:23:56.112212 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 14 00:23:56.112230 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 14 00:23:56.112248 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 14 00:23:56.112266 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 14 00:23:56.112284 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 14 00:23:56.112307 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:23:56.112325 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:23:56.112358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:23:56.112377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:23:56.112394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:23:56.112419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:23:56.112438 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:23:56.112456 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:23:56.112474 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 14 00:23:56.112497 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:23:56.112516 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:23:56.112534 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:23:56.112551 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:23:56.112569 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:23:56.112586 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:23:56.112604 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:23:56.112621 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:23:56.112642 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:23:56.112665 kernel: random: crng init done
Mar 14 00:23:56.112683 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 14 00:23:56.112701 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:23:56.112719 kernel: Fallback order for Node 0: 0
Mar 14 00:23:56.112738 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Mar 14 00:23:56.112756 kernel: Policy zone: Normal
Mar 14 00:23:56.112774 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:23:56.112792 kernel: software IO TLB: area num 2.
Mar 14 00:23:56.112810 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 347148K reserved, 0K cma-reserved)
Mar 14 00:23:56.112833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:23:56.112852 kernel: Kernel/User page tables isolation: enabled
Mar 14 00:23:56.112870 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:23:56.112888 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:23:56.112906 kernel: Dynamic Preempt: voluntary
Mar 14 00:23:56.112923 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:23:56.112941 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:23:56.112959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:23:56.112996 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:23:56.113014 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:23:56.113033 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:23:56.113056 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:23:56.113075 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:23:56.113093 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:23:56.113112 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:23:56.113131 kernel: Console: colour dummy device 80x25
Mar 14 00:23:56.113153 kernel: printk: console [ttyS0] enabled
Mar 14 00:23:56.113170 kernel: ACPI: Core revision 20230628
Mar 14 00:23:56.113187 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:23:56.113203 kernel: x2apic enabled
Mar 14 00:23:56.113219 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:23:56.113236 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 14 00:23:56.113256 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 14 00:23:56.113275 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 14 00:23:56.113295 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 14 00:23:56.113319 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 14 00:23:56.114390 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:23:56.114423 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 14 00:23:56.114442 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 14 00:23:56.114458 kernel: Spectre V2 : Mitigation: IBRS
Mar 14 00:23:56.114475 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:23:56.114493 kernel: RETBleed: Mitigation: IBRS
Mar 14 00:23:56.114512 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:23:56.114529 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 14 00:23:56.114555 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:23:56.114574 kernel: MDS: Mitigation: Clear CPU buffers
Mar 14 00:23:56.114591 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:23:56.114609 kernel: active return thunk: its_return_thunk
Mar 14 00:23:56.114629 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 14 00:23:56.114649 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:23:56.114668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:23:56.114688 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:23:56.114707 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:23:56.114732 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 14 00:23:56.114753 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:23:56.114772 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:23:56.114792 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:23:56.114810 kernel: landlock: Up and running.
Mar 14 00:23:56.114828 kernel: SELinux: Initializing.
Mar 14 00:23:56.114849 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 00:23:56.114869 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 00:23:56.114889 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 14 00:23:56.114914 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:23:56.114934 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:23:56.114954 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:23:56.114974 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 14 00:23:56.114995 kernel: signal: max sigframe size: 1776
Mar 14 00:23:56.115014 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:23:56.115032 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:23:56.115049 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:23:56.115065 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:23:56.115089 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:23:56.115105 kernel: .... node #0, CPUs: #1
Mar 14 00:23:56.115123 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 14 00:23:56.115152 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 14 00:23:56.115170 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:23:56.115187 kernel: smpboot: Max logical packages: 1
Mar 14 00:23:56.115206 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 14 00:23:56.115225 kernel: devtmpfs: initialized
Mar 14 00:23:56.115249 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:23:56.115269 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 14 00:23:56.115289 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:23:56.115305 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:23:56.115324 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:23:56.116504 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:23:56.116528 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:23:56.116549 kernel: audit: type=2000 audit(1773447834.423:1): state=initialized audit_enabled=0 res=1
Mar 14 00:23:56.116569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:23:56.116603 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:23:56.116623 kernel: cpuidle: using governor menu
Mar 14 00:23:56.116643 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:23:56.116663 kernel: dca service started, version 1.12.1
Mar 14 00:23:56.116683 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:23:56.116703 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:23:56.116723 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:23:56.116742 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:23:56.116762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:23:56.116786 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:23:56.116805 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:23:56.116825 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:23:56.116845 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:23:56.116864 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 14 00:23:56.116890 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:23:56.116910 kernel: ACPI: Interpreter enabled
Mar 14 00:23:56.116929 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:23:56.116949 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:23:56.116974 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:23:56.116993 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 14 00:23:56.117013 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 14 00:23:56.117033 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:23:56.117307 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:23:56.117563 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 14 00:23:56.117753 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 14 00:23:56.117784 kernel: PCI host bridge to bus 0000:00
Mar 14 00:23:56.117970 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:23:56.118144 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:23:56.118314 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:23:56.118681 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 14 00:23:56.118869 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:23:56.119095 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 14 00:23:56.119329 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 14 00:23:56.119589 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 14 00:23:56.119820 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 14 00:23:56.120021 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 14 00:23:56.120222 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Mar 14 00:23:56.120922 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 14 00:23:56.121137 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:23:56.121326 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Mar 14 00:23:56.121541 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 14 00:23:56.121732 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 14 00:23:56.121914 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Mar 14 00:23:56.122097 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 14 00:23:56.122120 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:23:56.122145 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:23:56.122171 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:23:56.122190 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:23:56.122208 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 14 00:23:56.122227 kernel: iommu: Default domain type: Translated
Mar 14 00:23:56.122246 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:23:56.122264 kernel: efivars: Registered efivars operations
Mar 14 00:23:56.122283 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:23:56.122302 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:23:56.122324 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 14 00:23:56.124260 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 14 00:23:56.124282 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 14 00:23:56.124301 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 14 00:23:56.124320 kernel: vgaarb: loaded
Mar 14 00:23:56.124370 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:23:56.124390 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:23:56.124416 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:23:56.124435 kernel: pnp: PnP ACPI init
Mar 14 00:23:56.124460 kernel: pnp: PnP ACPI: found 7 devices
Mar 14 00:23:56.124480 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:23:56.124498 kernel: NET: Registered PF_INET protocol family
Mar 14 00:23:56.124517 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 14 00:23:56.124536 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 14 00:23:56.124555 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:23:56.124574 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:23:56.124593 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 14 00:23:56.124611 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 14 00:23:56.124633 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 14 00:23:56.124652 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 14 00:23:56.124670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:23:56.124689 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:23:56.124888 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:23:56.125057 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:23:56.125239 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:23:56.125435 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 14 00:23:56.125628 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 14 00:23:56.125652 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:23:56.125672 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 14 00:23:56.125690 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 14 00:23:56.125709 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 14 00:23:56.125728 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 14 00:23:56.125747 kernel: clocksource: Switched to clocksource tsc
Mar 14 00:23:56.125766 kernel: Initialise system trusted keyrings
Mar 14 00:23:56.125788 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 14 00:23:56.125807 kernel: Key type asymmetric registered
Mar 14 00:23:56.125825 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:23:56.125843 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:23:56.125863 kernel: io scheduler mq-deadline registered
Mar 14 00:23:56.125880 kernel: io scheduler kyber registered
Mar 14 00:23:56.125898 kernel: io scheduler bfq registered
Mar 14 00:23:56.125916 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:23:56.125936 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 14 00:23:56.126133 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 14 00:23:56.126158 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 14 00:23:56.126898 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 14 00:23:56.126933 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 14 00:23:56.127155 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 14 00:23:56.127190 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:23:56.127211 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:23:56.127232 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 14 00:23:56.127252 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 14 00:23:56.127278 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 14 00:23:56.127530 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 14 00:23:56.127558 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:23:56.127577 kernel: i8042: Warning: Keylock active
Mar 14 00:23:56.127596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:23:56.127624 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:23:56.127832 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 14 00:23:56.128017 kernel: rtc_cmos 00:00: registered as rtc0
Mar 14 00:23:56.128200 kernel: rtc_cmos 00:00: setting system clock to 2026-03-14T00:23:55 UTC (1773447835)
Mar 14 00:23:56.128590 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 14 00:23:56.128619 kernel: intel_pstate: CPU model not supported
Mar 14 00:23:56.128645 kernel: pstore: Using crash dump compression: deflate
Mar 14 00:23:56.128665 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 14 00:23:56.128684 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:23:56.128703 kernel: Segment Routing with IPv6
Mar 14 00:23:56.128723 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:23:56.128749 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:23:56.128768 kernel: Key type dns_resolver registered
Mar 14 00:23:56.128787 kernel: IPI shorthand broadcast: enabled
Mar 14 00:23:56.128806 kernel: sched_clock: Marking stable (873004150, 147205188)->(1062447234, -42237896)
Mar 14 00:23:56.128825 kernel: registered taskstats version 1
Mar 14 00:23:56.128844 kernel: Loading compiled-in X.509 certificates
Mar 14 00:23:56.128863 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:23:56.128882 kernel: Key type .fscrypt registered
Mar 14 00:23:56.128901 kernel: Key type fscrypt-provisioning registered
Mar 14 00:23:56.128924 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:23:56.128943 kernel: ima: No architecture policies found
Mar 14 00:23:56.128962 kernel: clk: Disabling unused clocks
Mar 14 00:23:56.128981 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:23:56.129000 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:23:56.129019 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:23:56.129038 kernel: Run /init as init process
Mar 14 00:23:56.129057 kernel: with arguments:
Mar 14 00:23:56.129076 kernel: /init
Mar 14 00:23:56.129098 kernel: with environment:
Mar 14 00:23:56.129117 kernel: HOME=/
Mar 14 00:23:56.129135 kernel: TERM=linux
Mar 14 00:23:56.129154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:23:56.129177 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:23:56.129200 systemd[1]: Detected virtualization google.
Mar 14 00:23:56.129220 systemd[1]: Detected architecture x86-64.
Mar 14 00:23:56.129243 systemd[1]: Running in initrd.
Mar 14 00:23:56.129262 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:23:56.129281 systemd[1]: Hostname set to .
Mar 14 00:23:56.129302 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:23:56.129322 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:23:56.129355 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:23:56.129372 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:23:56.129390 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:23:56.129422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:23:56.129441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:23:56.129460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:23:56.129483 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:23:56.129502 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:23:56.129524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:23:56.129542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:23:56.129564 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:23:56.129583 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:23:56.129621 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:23:56.129645 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:23:56.129666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:23:56.129689 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:23:56.129713 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:23:56.129731 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:23:56.129749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:23:56.129769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:23:56.129789 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:23:56.129809 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:23:56.129831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:23:56.129851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:23:56.129873 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:23:56.129898 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:23:56.129919 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:23:56.129941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:23:56.129963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:23:56.130020 systemd-journald[184]: Collecting audit messages is disabled.
Mar 14 00:23:56.130063 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:23:56.130083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:23:56.130104 systemd-journald[184]: Journal started
Mar 14 00:23:56.130146 systemd-journald[184]: Runtime Journal (/run/log/journal/622bb131f22e49adb645aa46a6fcf463) is 8.0M, max 148.7M, 140.7M free.
Mar 14 00:23:56.132357 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:23:56.133731 systemd-modules-load[185]: Inserted module 'overlay'
Mar 14 00:23:56.135037 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:23:56.145716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:23:56.153624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:23:56.160044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:23:56.169647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:23:56.178520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:23:56.199616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:23:56.224524 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:23:56.224581 kernel: Bridge firewalling registered
Mar 14 00:23:56.203853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:23:56.209730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:23:56.212953 systemd-modules-load[185]: Inserted module 'br_netfilter'
Mar 14 00:23:56.214900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:23:56.226616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:23:56.253176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:23:56.263600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:23:56.270641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:23:56.289575 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:23:56.311993 systemd-resolved[215]: Positive Trust Anchors:
Mar 14 00:23:56.312013 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:23:56.322583 dracut-cmdline[219]: dracut-dracut-053
Mar 14 00:23:56.322583 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:23:56.312084 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:23:56.319592 systemd-resolved[215]: Defaulting to hostname 'linux'.
Mar 14 00:23:56.321321 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:23:56.326894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:23:56.422385 kernel: SCSI subsystem initialized
Mar 14 00:23:56.434383 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:23:56.447750 kernel: iscsi: registered transport (tcp)
Mar 14 00:23:56.473376 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:23:56.473461 kernel: QLogic iSCSI HBA Driver
Mar 14 00:23:56.526748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:23:56.532584 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:23:56.576508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:23:56.576599 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:23:56.576625 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:23:56.621372 kernel: raid6: avx2x4 gen() 17931 MB/s
Mar 14 00:23:56.638369 kernel: raid6: avx2x2 gen() 17948 MB/s
Mar 14 00:23:56.655936 kernel: raid6: avx2x1 gen() 13841 MB/s
Mar 14 00:23:56.655991 kernel: raid6: using algorithm avx2x2 gen() 17948 MB/s
Mar 14 00:23:56.673901 kernel: raid6: .... xor() 18019 MB/s, rmw enabled
Mar 14 00:23:56.673960 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:23:56.697385 kernel: xor: automatically using best checksumming function avx
Mar 14 00:23:56.871379 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:23:56.885313 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:23:56.894641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:23:56.924516 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Mar 14 00:23:56.931598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:23:56.942627 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:23:56.978386 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Mar 14 00:23:57.018083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:23:57.035563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:23:57.119870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:23:57.132650 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:23:57.174921 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:23:57.184582 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:23:57.186986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:23:57.199497 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:23:57.210903 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:23:57.263061 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:23:57.263174 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:23:57.263212 kernel: blk-mq: reduced tag depth to 10240
Mar 14 00:23:57.266259 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:23:57.293382 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 14 00:23:57.334167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:23:57.343426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:23:57.357405 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:23:57.357483 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:23:57.357970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:23:57.362437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:23:57.362710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:23:57.364931 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:23:57.377798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:23:57.414905 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Mar 14 00:23:57.415255 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 14 00:23:57.418414 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 14 00:23:57.418763 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 14 00:23:57.421378 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:23:57.422647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:23:57.433032 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:23:57.433076 kernel: GPT:17805311 != 33554431
Mar 14 00:23:57.433102 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:23:57.433126 kernel: GPT:17805311 != 33554431
Mar 14 00:23:57.433150 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:23:57.433185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:57.436589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:23:57.440591 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 14 00:23:57.507819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:23:57.551508 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (471)
Mar 14 00:23:57.551569 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (457)
Mar 14 00:23:57.548778 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 14 00:23:57.574582 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 14 00:23:57.587176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 14 00:23:57.621104 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 14 00:23:57.645524 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 14 00:23:57.652598 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:23:57.689721 disk-uuid[552]: Primary Header is updated.
Mar 14 00:23:57.689721 disk-uuid[552]: Secondary Entries is updated.
Mar 14 00:23:57.689721 disk-uuid[552]: Secondary Header is updated.
Mar 14 00:23:57.715460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:57.726372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:57.750397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:58.746598 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:58.747386 disk-uuid[553]: The operation has completed successfully.
Mar 14 00:23:58.821190 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:23:58.821371 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:23:58.856571 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:23:58.881024 sh[570]: Success
Mar 14 00:23:58.905402 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 14 00:23:59.001393 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:23:59.009515 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:23:59.032931 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:23:59.083787 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:23:59.083882 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:23:59.083924 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:23:59.093400 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:23:59.106010 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:23:59.128372 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:23:59.137084 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:23:59.153385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:23:59.158589 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:23:59.177639 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:23:59.219382 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:23:59.233604 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:23:59.233693 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:23:59.252249 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:23:59.252382 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:23:59.269778 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:23:59.287583 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:23:59.288924 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:23:59.299644 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:23:59.342072 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:23:59.381641 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:23:59.496482 systemd-networkd[752]: lo: Link UP
Mar 14 00:23:59.496494 systemd-networkd[752]: lo: Gained carrier
Mar 14 00:23:59.503868 ignition[703]: Ignition 2.19.0
Mar 14 00:23:59.498973 systemd-networkd[752]: Enumeration completed
Mar 14 00:23:59.503876 ignition[703]: Stage: fetch-offline
Mar 14 00:23:59.499132 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:23:59.503960 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.500054 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:23:59.503973 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.500061 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:23:59.504102 ignition[703]: parsed url from cmdline: ""
Mar 14 00:23:59.502496 systemd-networkd[752]: eth0: Link UP
Mar 14 00:23:59.504107 ignition[703]: no config URL provided
Mar 14 00:23:59.502503 systemd-networkd[752]: eth0: Gained carrier
Mar 14 00:23:59.504113 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.502516 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:23:59.504126 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.510434 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da'
Mar 14 00:23:59.504134 ignition[703]: failed to fetch config: resource requires networking
Mar 14 00:23:59.510451 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 14 00:23:59.504588 ignition[703]: Ignition finished successfully
Mar 14 00:23:59.516034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:23:59.597724 ignition[761]: Ignition 2.19.0
Mar 14 00:23:59.534413 systemd[1]: Reached target network.target - Network.
Mar 14 00:23:59.597734 ignition[761]: Stage: fetch
Mar 14 00:23:59.552567 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:23:59.597940 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.609420 unknown[761]: fetched base config from "system"
Mar 14 00:23:59.597955 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.609433 unknown[761]: fetched base config from "system"
Mar 14 00:23:59.598082 ignition[761]: parsed url from cmdline: ""
Mar 14 00:23:59.609442 unknown[761]: fetched user config from "gcp"
Mar 14 00:23:59.598090 ignition[761]: no config URL provided
Mar 14 00:23:59.612048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:23:59.598097 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.631590 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:23:59.598108 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.660866 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:23:59.598132 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 14 00:23:59.680574 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:23:59.602181 ignition[761]: GET result: OK
Mar 14 00:23:59.723965 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:23:59.602284 ignition[761]: parsing config with SHA512: 2bfd635d4b5b537aca430f50aa739eb5466df5f2db6215a30aef6397c6fb7c9e0ec7a7fd395a2e0ee7b7c53216b488b1d3b887f70e216e16ea6e296969f352da
Mar 14 00:23:59.763400 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:23:59.610095 ignition[761]: fetch: fetch complete
Mar 14 00:23:59.786526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:23:59.610106 ignition[761]: fetch: fetch passed
Mar 14 00:23:59.803542 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:23:59.610182 ignition[761]: Ignition finished successfully
Mar 14 00:23:59.821539 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:23:59.658184 ignition[767]: Ignition 2.19.0
Mar 14 00:23:59.835536 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:23:59.658194 ignition[767]: Stage: kargs
Mar 14 00:23:59.857602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:23:59.658460 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.658479 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.659605 ignition[767]: kargs: kargs passed
Mar 14 00:23:59.659664 ignition[767]: Ignition finished successfully
Mar 14 00:23:59.721232 ignition[772]: Ignition 2.19.0
Mar 14 00:23:59.721242 ignition[772]: Stage: disks
Mar 14 00:23:59.721527 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.721559 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.722718 ignition[772]: disks: disks passed
Mar 14 00:23:59.722786 ignition[772]: Ignition finished successfully
Mar 14 00:23:59.926707 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:24:00.115521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:24:00.148541 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:24:00.270373 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:24:00.271151 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:24:00.272103 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:24:00.307522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:24:00.320519 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:24:00.397538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789)
Mar 14 00:24:00.397589 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:24:00.397617 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:24:00.397639 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:24:00.397661 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:24:00.397684 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:24:00.362939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:24:00.363004 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:24:00.363041 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:24:00.394695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:24:00.423465 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:24:00.454587 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:24:00.591353 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:24:00.601848 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:24:00.611518 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:24:00.621531 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:24:00.766840 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:24:00.797521 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:24:00.822511 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:24:00.800726 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:24:00.841632 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:24:00.881895 ignition[901]: INFO : Ignition 2.19.0
Mar 14 00:24:00.882469 ignition[901]: INFO : Stage: mount
Mar 14 00:24:00.882161 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:24:00.921522 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:24:00.921522 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:24:00.921522 ignition[901]: INFO : mount: mount passed
Mar 14 00:24:00.921522 ignition[901]: INFO : Ignition finished successfully
Mar 14 00:24:00.896956 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:24:00.914750 systemd-networkd[752]: eth0: Gained IPv6LL
Mar 14 00:24:00.923494 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:24:01.280655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:24:01.314611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913)
Mar 14 00:24:01.314674 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:24:01.314700 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:24:01.328141 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:24:01.345069 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:24:01.345167 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:24:01.348845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:24:01.389771 ignition[930]: INFO : Ignition 2.19.0
Mar 14 00:24:01.389771 ignition[930]: INFO : Stage: files
Mar 14 00:24:01.404519 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:24:01.404519 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:24:01.404519 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:24:01.404519 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:24:01.404405 unknown[930]: wrote ssh authorized keys file for user: core
Mar 14 00:24:01.506509 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:24:01.506509 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:24:01.548461 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:24:01.810893 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:24:01.810893 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:24:02.240859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:24:02.814546 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:02.814546 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: files passed
Mar 14 00:24:02.853541 ignition[930]: INFO : Ignition finished successfully
Mar 14 00:24:02.819749 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:24:02.840714 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:24:02.877695 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:24:02.895160 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:24:03.063519 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:24:03.063519 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:24:02.895292 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:24:03.112662 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:24:02.962730 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:24:02.977905 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:24:03.000578 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:24:03.080839 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:24:03.080967 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:24:03.103441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:24:03.122541 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:24:03.143729 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:24:03.149586 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:24:03.227728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:24:03.256623 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:24:03.294454 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:24:03.306821 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:24:03.336766 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:24:03.337185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:24:03.337415 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:24:03.381821 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:24:03.398783 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:24:03.399228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:24:03.432771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:24:03.433187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:24:03.470722 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:24:03.471233 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:24:03.487995 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:24:03.525720 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:24:03.526151 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:24:03.542931 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:24:03.543137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:24:03.583753 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:24:03.584130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:24:03.622607 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:24:03.623001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:24:03.642767 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:24:03.642935 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:24:03.672936 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:24:03.673170 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:24:03.682956 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:24:03.683157 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:24:03.729624 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:24:03.740679 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:24:03.778621 ignition[983]: INFO : Ignition 2.19.0 Mar 14 00:24:03.778621 ignition[983]: INFO : Stage: umount Mar 14 00:24:03.778621 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:24:03.778621 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 14 00:24:03.778621 ignition[983]: INFO : umount: umount passed Mar 14 00:24:03.778621 ignition[983]: INFO : Ignition finished successfully Mar 14 00:24:03.740953 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:24:03.774698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:24:03.786500 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:24:03.786824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:24:03.811037 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:24:03.811239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:24:03.851367 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:24:03.852408 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:24:03.852529 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:24:03.867328 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Mar 14 00:24:03.867491 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:24:03.890719 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:24:03.890856 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:24:03.909267 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:24:03.909376 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:24:03.928744 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:24:03.928847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:24:03.948726 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:24:03.948805 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:24:03.968736 systemd[1]: Stopped target network.target - Network. Mar 14 00:24:03.993673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:24:03.993785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:24:04.001852 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:24:04.027638 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:24:04.032458 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:24:04.035747 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:24:04.053717 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:24:04.068765 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:24:04.068834 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:24:04.086795 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:24:04.086865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:24:04.120605 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 14 00:24:04.120859 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:24:04.139793 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:24:04.139877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:24:04.147797 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:24:04.147881 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:24:04.182003 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:24:04.187478 systemd-networkd[752]: eth0: DHCPv6 lease lost Mar 14 00:24:04.199911 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:24:04.231142 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:24:04.231290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:24:04.250545 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:24:04.250839 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:24:04.260441 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:24:04.260512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:24:04.303489 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:24:04.321464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:24:04.321693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:24:04.340751 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:24:04.340858 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:24:04.359716 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:24:04.359819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 14 00:24:04.369874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:24:04.369947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:24:04.386952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:24:04.426087 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:24:04.426269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:24:04.454110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:24:04.454184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:24:04.485796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:24:04.485856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:24:04.502712 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:24:04.839539 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Mar 14 00:24:04.502814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:24:04.546691 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:24:04.546816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:24:04.573811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:24:04.573898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:24:04.628592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:24:04.650489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:24:04.650706 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:24:04.671616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 14 00:24:04.671711 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:24:04.693191 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:24:04.693322 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:24:04.712937 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:24:04.713063 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:24:04.735935 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:24:04.761595 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:24:04.799196 systemd[1]: Switching root. Mar 14 00:24:05.006522 systemd-journald[184]: Journal stopped
Mar 14 00:23:56.128749 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:23:56.128768 kernel: Key type dns_resolver registered
Mar 14 00:23:56.128787 kernel: IPI shorthand broadcast: enabled
Mar 14 00:23:56.128806 kernel: sched_clock: Marking stable (873004150, 147205188)->(1062447234, -42237896)
Mar 14 00:23:56.128825 kernel: registered taskstats version 1
Mar 14 00:23:56.128844 kernel: Loading compiled-in X.509 certificates
Mar 14 00:23:56.128863 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:23:56.128882 kernel: Key type .fscrypt registered
Mar 14 00:23:56.128901 kernel: Key type fscrypt-provisioning registered
Mar 14 00:23:56.128924 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:23:56.128943 kernel: ima: No architecture policies found
Mar 14 00:23:56.128962 kernel: clk: Disabling unused clocks
Mar 14 00:23:56.128981 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:23:56.129000 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:23:56.129019 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:23:56.129038 kernel: Run /init as init process
Mar 14 00:23:56.129057 kernel: with arguments:
Mar 14 00:23:56.129076 kernel: /init
Mar 14 00:23:56.129098 kernel: with environment:
Mar 14 00:23:56.129117 kernel: HOME=/
Mar 14 00:23:56.129135 kernel: TERM=linux
Mar 14 00:23:56.129154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:23:56.129177 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:23:56.129200 systemd[1]: Detected virtualization google.
Mar 14 00:23:56.129220 systemd[1]: Detected architecture x86-64.
Mar 14 00:23:56.129243 systemd[1]: Running in initrd.
Mar 14 00:23:56.129262 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:23:56.129281 systemd[1]: Hostname set to .
Mar 14 00:23:56.129302 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:23:56.129322 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:23:56.129355 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:23:56.129372 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:23:56.129390 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:23:56.129422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:23:56.129441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:23:56.129460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:23:56.129483 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:23:56.129502 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:23:56.129524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:23:56.129542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:23:56.129564 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:23:56.129583 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:23:56.129621 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:23:56.129645 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:23:56.129666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:23:56.129689 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:23:56.129713 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:23:56.129731 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:23:56.129749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:23:56.129769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:23:56.129789 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:23:56.129809 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:23:56.129831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:23:56.129851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:23:56.129873 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:23:56.129898 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:23:56.129919 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:23:56.129941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:23:56.129963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:23:56.130020 systemd-journald[184]: Collecting audit messages is disabled.
Mar 14 00:23:56.130063 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:23:56.130083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:23:56.130104 systemd-journald[184]: Journal started
Mar 14 00:23:56.130146 systemd-journald[184]: Runtime Journal (/run/log/journal/622bb131f22e49adb645aa46a6fcf463) is 8.0M, max 148.7M, 140.7M free.
Mar 14 00:23:56.132357 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:23:56.133731 systemd-modules-load[185]: Inserted module 'overlay'
Mar 14 00:23:56.135037 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:23:56.145716 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:23:56.153624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:23:56.160044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:23:56.169647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:23:56.178520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:23:56.199616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:23:56.224524 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:23:56.224581 kernel: Bridge firewalling registered
Mar 14 00:23:56.203853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:23:56.209730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:23:56.212953 systemd-modules-load[185]: Inserted module 'br_netfilter'
Mar 14 00:23:56.214900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:23:56.226616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:23:56.253176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:23:56.263600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:23:56.270641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:23:56.289575 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:23:56.311993 systemd-resolved[215]: Positive Trust Anchors:
Mar 14 00:23:56.312013 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:23:56.322583 dracut-cmdline[219]: dracut-dracut-053
Mar 14 00:23:56.322583 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:23:56.312084 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:23:56.319592 systemd-resolved[215]: Defaulting to hostname 'linux'.
Mar 14 00:23:56.321321 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:23:56.326894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:23:56.422385 kernel: SCSI subsystem initialized
Mar 14 00:23:56.434383 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:23:56.447750 kernel: iscsi: registered transport (tcp)
Mar 14 00:23:56.473376 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:23:56.473461 kernel: QLogic iSCSI HBA Driver
Mar 14 00:23:56.526748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:23:56.532584 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:23:56.576508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:23:56.576599 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:23:56.576625 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:23:56.621372 kernel: raid6: avx2x4 gen() 17931 MB/s
Mar 14 00:23:56.638369 kernel: raid6: avx2x2 gen() 17948 MB/s
Mar 14 00:23:56.655936 kernel: raid6: avx2x1 gen() 13841 MB/s
Mar 14 00:23:56.655991 kernel: raid6: using algorithm avx2x2 gen() 17948 MB/s
Mar 14 00:23:56.673901 kernel: raid6: .... xor() 18019 MB/s, rmw enabled
Mar 14 00:23:56.673960 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:23:56.697385 kernel: xor: automatically using best checksumming function avx
Mar 14 00:23:56.871379 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:23:56.885313 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:23:56.894641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:23:56.924516 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Mar 14 00:23:56.931598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:23:56.942627 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:23:56.978386 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Mar 14 00:23:57.018083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:23:57.035563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:23:57.119870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:23:57.132650 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:23:57.174921 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:23:57.184582 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:23:57.186986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:23:57.199497 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:23:57.210903 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:23:57.263061 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:23:57.263174 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:23:57.263212 kernel: blk-mq: reduced tag depth to 10240
Mar 14 00:23:57.266259 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:23:57.293382 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 14 00:23:57.334167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:23:57.343426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:23:57.357405 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:23:57.357483 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:23:57.357970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:23:57.362437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:23:57.362710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:23:57.364931 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:23:57.377798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:23:57.414905 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Mar 14 00:23:57.415255 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 14 00:23:57.418414 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 14 00:23:57.418763 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 14 00:23:57.421378 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:23:57.422647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:23:57.433032 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:23:57.433076 kernel: GPT:17805311 != 33554431
Mar 14 00:23:57.433102 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:23:57.433126 kernel: GPT:17805311 != 33554431
Mar 14 00:23:57.433150 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:23:57.433185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:57.436589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:23:57.440591 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 14 00:23:57.507819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:23:57.551508 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (471)
Mar 14 00:23:57.551569 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (457)
Mar 14 00:23:57.548778 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 14 00:23:57.574582 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 14 00:23:57.587176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 14 00:23:57.621104 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 14 00:23:57.645524 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 14 00:23:57.652598 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:23:57.689721 disk-uuid[552]: Primary Header is updated.
Mar 14 00:23:57.689721 disk-uuid[552]: Secondary Entries is updated.
Mar 14 00:23:57.689721 disk-uuid[552]: Secondary Header is updated.
Mar 14 00:23:57.715460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:57.726372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:57.750397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:58.746598 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:23:58.747386 disk-uuid[553]: The operation has completed successfully.
Mar 14 00:23:58.821190 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:23:58.821371 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:23:58.856571 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:23:58.881024 sh[570]: Success
Mar 14 00:23:58.905402 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 14 00:23:59.001393 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:23:59.009515 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:23:59.032931 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:23:59.083787 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:23:59.083882 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:23:59.083924 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:23:59.093400 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:23:59.106010 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:23:59.128372 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:23:59.137084 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:23:59.153385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:23:59.158589 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:23:59.177639 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:23:59.219382 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:23:59.233604 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:23:59.233693 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:23:59.252249 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:23:59.252382 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:23:59.269778 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:23:59.287583 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:23:59.288924 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:23:59.299644 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:23:59.342072 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:23:59.381641 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:23:59.496482 systemd-networkd[752]: lo: Link UP
Mar 14 00:23:59.496494 systemd-networkd[752]: lo: Gained carrier
Mar 14 00:23:59.503868 ignition[703]: Ignition 2.19.0
Mar 14 00:23:59.498973 systemd-networkd[752]: Enumeration completed
Mar 14 00:23:59.503876 ignition[703]: Stage: fetch-offline
Mar 14 00:23:59.499132 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:23:59.503960 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.500054 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:23:59.503973 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.500061 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:23:59.504102 ignition[703]: parsed url from cmdline: ""
Mar 14 00:23:59.502496 systemd-networkd[752]: eth0: Link UP
Mar 14 00:23:59.504107 ignition[703]: no config URL provided
Mar 14 00:23:59.502503 systemd-networkd[752]: eth0: Gained carrier
Mar 14 00:23:59.504113 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.502516 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:23:59.504126 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.510434 systemd-networkd[752]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da'
Mar 14 00:23:59.504134 ignition[703]: failed to fetch config: resource requires networking
Mar 14 00:23:59.510451 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 14 00:23:59.504588 ignition[703]: Ignition finished successfully
Mar 14 00:23:59.516034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:23:59.597724 ignition[761]: Ignition 2.19.0
Mar 14 00:23:59.534413 systemd[1]: Reached target network.target - Network.
Mar 14 00:23:59.597734 ignition[761]: Stage: fetch
Mar 14 00:23:59.552567 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:23:59.597940 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.609420 unknown[761]: fetched base config from "system"
Mar 14 00:23:59.597955 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.609433 unknown[761]: fetched base config from "system"
Mar 14 00:23:59.598082 ignition[761]: parsed url from cmdline: ""
Mar 14 00:23:59.609442 unknown[761]: fetched user config from "gcp"
Mar 14 00:23:59.598090 ignition[761]: no config URL provided
Mar 14 00:23:59.612048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:23:59.598097 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.631590 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:23:59.598108 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:23:59.660866 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:23:59.598132 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 14 00:23:59.680574 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:23:59.602181 ignition[761]: GET result: OK
Mar 14 00:23:59.723965 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:23:59.602284 ignition[761]: parsing config with SHA512: 2bfd635d4b5b537aca430f50aa739eb5466df5f2db6215a30aef6397c6fb7c9e0ec7a7fd395a2e0ee7b7c53216b488b1d3b887f70e216e16ea6e296969f352da
Mar 14 00:23:59.763400 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:23:59.610095 ignition[761]: fetch: fetch complete
Mar 14 00:23:59.786526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:23:59.610106 ignition[761]: fetch: fetch passed
Mar 14 00:23:59.803542 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:23:59.610182 ignition[761]: Ignition finished successfully
Mar 14 00:23:59.821539 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:23:59.658184 ignition[767]: Ignition 2.19.0
Mar 14 00:23:59.835536 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:23:59.658194 ignition[767]: Stage: kargs
Mar 14 00:23:59.857602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:23:59.658460 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.658479 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.659605 ignition[767]: kargs: kargs passed
Mar 14 00:23:59.659664 ignition[767]: Ignition finished successfully
Mar 14 00:23:59.721232 ignition[772]: Ignition 2.19.0
Mar 14 00:23:59.721242 ignition[772]: Stage: disks
Mar 14 00:23:59.721527 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:23:59.721559 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:23:59.722718 ignition[772]: disks: disks passed
Mar 14 00:23:59.722786 ignition[772]: Ignition finished successfully
Mar 14 00:23:59.926707 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:24:00.115521 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:24:00.148541 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:24:00.270373 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:24:00.271151 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:24:00.272103 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:24:00.307522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:24:00.320519 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:24:00.397538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789)
Mar 14 00:24:00.397589 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:24:00.397617 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:24:00.397639 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:24:00.397661 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:24:00.397684 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:24:00.362939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:24:00.363004 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:24:00.363041 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:24:00.394695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:24:00.423465 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:24:00.454587 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:24:00.591353 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:24:00.601848 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:24:00.611518 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:24:00.621531 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:24:00.766840 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:24:00.797521 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:24:00.822511 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:24:00.800726 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:24:00.841632 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:24:00.881895 ignition[901]: INFO : Ignition 2.19.0
Mar 14 00:24:00.882469 ignition[901]: INFO : Stage: mount
Mar 14 00:24:00.882161 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:24:00.921522 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:24:00.921522 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:24:00.921522 ignition[901]: INFO : mount: mount passed
Mar 14 00:24:00.921522 ignition[901]: INFO : Ignition finished successfully
Mar 14 00:24:00.896956 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:24:00.914750 systemd-networkd[752]: eth0: Gained IPv6LL
Mar 14 00:24:00.923494 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:24:01.280655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:24:01.314611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913)
Mar 14 00:24:01.314674 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:24:01.314700 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:24:01.328141 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:24:01.345069 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:24:01.345167 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:24:01.348845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:24:01.389771 ignition[930]: INFO : Ignition 2.19.0
Mar 14 00:24:01.389771 ignition[930]: INFO : Stage: files
Mar 14 00:24:01.404519 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:24:01.404519 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:24:01.404519 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:24:01.404519 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:24:01.404519 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:24:01.404405 unknown[930]: wrote ssh authorized keys file for user: core
Mar 14 00:24:01.506509 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:24:01.506509 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:24:01.548461 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:24:01.810893 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:24:01.810893 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:01.842520 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:24:02.240859 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:24:02.814546 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:24:02.814546 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:24:02.853541 ignition[930]: INFO : files: files passed
Mar 14 00:24:02.853541 ignition[930]: INFO : Ignition finished successfully
Mar 14 00:24:02.819749 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:24:02.840714 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:24:02.877695 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
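The Ignition files-stage operations logged above (user "core" with ssh keys, the helm tarball, update.conf, and the kubernetes sysext link) are the kind of actions a user-supplied config produces. A minimal Butane sketch (flatcar variant) that would yield roughly these operations might look like the following; the ssh key, inline contents, and unit body are placeholders, not recovered from the log:

```yaml
# Hypothetical Butane config approximating the operations logged by
# ignition[930]; paths and URLs are taken from the log, everything else
# (key material, inline contents, unit body) is a placeholder.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...  # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /etc/flatcar/update.conf
      contents:
        inline: REBOOT_STRATEGY=off  # placeholder policy
    - path: /opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service  # matches ops (b)-(d) above
      enabled: true
      contents: |
        [Unit]
        Description=Placeholder unit body
        [Install]
        WantedBy=multi-user.target
```

Butane transpiles this into the Ignition JSON that the ignition[930] process in the log consumed; op numbers in the log correspond one-to-one to these config entries.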
Mar 14 00:24:02.895160 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:24:03.063519 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:24:03.063519 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:24:02.895292 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:24:03.112662 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:24:02.962730 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:24:02.977905 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:24:03.000578 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:24:03.080839 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:24:03.080967 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:24:03.103441 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:24:03.122541 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:24:03.143729 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:24:03.149586 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:24:03.227728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:24:03.256623 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:24:03.294454 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:24:03.306821 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:24:03.336766 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:24:03.337185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:24:03.337415 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:24:03.381821 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:24:03.398783 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:24:03.399228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:24:03.432771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:24:03.433187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:24:03.470722 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:24:03.471233 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:24:03.487995 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:24:03.525720 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:24:03.526151 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:24:03.542931 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:24:03.543137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:24:03.583753 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:24:03.584130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:24:03.622607 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:24:03.623001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:24:03.642767 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:24:03.642935 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:24:03.672936 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:24:03.673170 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:24:03.682956 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:24:03.683157 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:24:03.729624 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:24:03.740679 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:24:03.778621 ignition[983]: INFO : Ignition 2.19.0
Mar 14 00:24:03.778621 ignition[983]: INFO : Stage: umount
Mar 14 00:24:03.778621 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:24:03.778621 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 14 00:24:03.778621 ignition[983]: INFO : umount: umount passed
Mar 14 00:24:03.778621 ignition[983]: INFO : Ignition finished successfully
Mar 14 00:24:03.740953 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:24:03.774698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:24:03.786500 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:24:03.786824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:24:03.811037 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:24:03.811239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:24:03.851367 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:24:03.852408 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:24:03.852529 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:24:03.867328 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:24:03.867491 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:24:03.890719 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:24:03.890856 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:24:03.909267 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:24:03.909376 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:24:03.928744 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:24:03.928847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:24:03.948726 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:24:03.948805 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:24:03.968736 systemd[1]: Stopped target network.target - Network.
Mar 14 00:24:03.993673 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:24:03.993785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:24:04.001852 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:24:04.027638 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:24:04.032458 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:24:04.035747 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:24:04.053717 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:24:04.068765 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:24:04.068834 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:24:04.086795 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:24:04.086865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:24:04.120605 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:24:04.120859 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:24:04.139793 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:24:04.139877 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:24:04.147797 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:24:04.147881 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:24:04.182003 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:24:04.187478 systemd-networkd[752]: eth0: DHCPv6 lease lost
Mar 14 00:24:04.199911 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:24:04.231142 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:24:04.231290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:24:04.250545 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:24:04.250839 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:24:04.260441 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:24:04.260512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:24:04.303489 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:24:04.321464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:24:04.321693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:24:04.340751 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:24:04.340858 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:24:04.359716 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:24:04.359819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:24:04.369874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:24:04.369947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:24:04.386952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:24:04.426087 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:24:04.426269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:24:04.454110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:24:04.454184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:24:04.485796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:24:04.485856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:24:04.502712 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:24:04.839539 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:24:04.502814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:24:04.546691 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:24:04.546816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:24:04.573811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:24:04.573898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:24:04.628592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:24:04.650489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:24:04.650706 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:24:04.671616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:24:04.671711 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:24:04.693191 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:24:04.693322 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:24:04.712937 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:24:04.713063 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:24:04.735935 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:24:04.761595 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:24:04.799196 systemd[1]: Switching root.
Mar 14 00:24:05.006522 systemd-journald[184]: Journal stopped
Mar 14 00:24:07.646670 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:24:07.646733 kernel: SELinux: policy capability open_perms=1
Mar 14 00:24:07.646757 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:24:07.646776 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:24:07.646795 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:24:07.646814 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:24:07.646834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:24:07.646856 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:24:07.646874 kernel: audit: type=1403 audit(1773447845.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:24:07.646897 systemd[1]: Successfully loaded SELinux policy in 92.570ms.
Mar 14 00:24:07.646919 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.187ms.
Mar 14 00:24:07.646941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:24:07.646961 systemd[1]: Detected virtualization google.
Mar 14 00:24:07.646982 systemd[1]: Detected architecture x86-64.
Mar 14 00:24:07.647004 systemd[1]: Detected first boot.
Mar 14 00:24:07.647018 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:24:07.647032 zram_generator::config[1024]: No configuration found.
Mar 14 00:24:07.647046 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:24:07.647059 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:24:07.647075 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:24:07.647088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:24:07.647102 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:24:07.647115 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:24:07.647128 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:24:07.647144 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:24:07.647158 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:24:07.647174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:24:07.647188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:24:07.647201 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:24:07.647214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:24:07.647228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:24:07.647242 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:24:07.647255 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:24:07.647268 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:24:07.647285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:24:07.647298 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:24:07.647312 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:24:07.647325 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:24:07.647364 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:24:07.647392 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:24:07.647412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:24:07.647426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:24:07.647439 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:24:07.647457 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:24:07.647470 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:24:07.647486 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:24:07.647499 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:24:07.647514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:24:07.647527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:24:07.647541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:24:07.647559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:24:07.647573 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:24:07.647586 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:24:07.647600 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:24:07.647614 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:24:07.647633 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:24:07.647648 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:24:07.647662 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:24:07.647677 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:24:07.647691 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:24:07.647705 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:24:07.647719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:24:07.647733 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:24:07.647749 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:24:07.647763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:24:07.647777 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:24:07.647792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:24:07.647806 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:24:07.647820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:24:07.647834 kernel: ACPI: bus type drm_connector registered
Mar 14 00:24:07.647847 kernel: fuse: init (API version 7.39)
Mar 14 00:24:07.647863 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:24:07.647878 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:24:07.647892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:24:07.647905 kernel: loop: module loaded
Mar 14 00:24:07.647918 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:24:07.647932 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:24:07.647946 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:24:07.647960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:24:07.647973 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:24:07.648022 systemd-journald[1111]: Collecting audit messages is disabled.
Mar 14 00:24:07.648052 systemd-journald[1111]: Journal started
Mar 14 00:24:07.648083 systemd-journald[1111]: Runtime Journal (/run/log/journal/706cb119774f40a09af0f815e2375b68) is 8.0M, max 148.7M, 140.7M free.
Mar 14 00:24:06.380578 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:24:06.407612 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:24:06.408203 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:24:07.662979 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:24:07.688883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:24:07.705466 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:24:07.705567 systemd[1]: Stopped verity-setup.service.
Mar 14 00:24:07.735597 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:24:07.746393 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:24:07.756996 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:24:07.767789 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:24:07.777751 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:24:07.787724 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:24:07.797738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:24:07.807699 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:24:07.817939 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:24:07.829949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:24:07.841922 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:24:07.842178 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:24:07.853903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:24:07.854161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:24:07.865914 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:24:07.866161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:24:07.876940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:24:07.877187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:24:07.888917 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:24:07.889170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:24:07.900995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:24:07.901265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:24:07.912934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:24:07.922903 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:24:07.934920 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:24:07.946969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:24:07.971903 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:24:07.986493 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:24:08.011448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:24:08.021569 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:24:08.021829 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:24:08.033056 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:24:08.049615 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:24:08.066984 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:24:08.076749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:24:08.079448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:24:08.094181 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:24:08.105581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:24:08.117107 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:24:08.125597 systemd-journald[1111]: Time spent on flushing to /var/log/journal/706cb119774f40a09af0f815e2375b68 is 128.446ms for 926 entries.
Mar 14 00:24:08.125597 systemd-journald[1111]: System Journal (/var/log/journal/706cb119774f40a09af0f815e2375b68) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:24:08.301608 systemd-journald[1111]: Received client request to flush runtime journal.
Mar 14 00:24:08.301701 kernel: loop0: detected capacity change from 0 to 142488
Mar 14 00:24:08.133904 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:24:08.142058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:24:08.161713 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:24:08.178681 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:24:08.202557 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:24:08.218595 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:24:08.230731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:24:08.242953 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:24:08.255109 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:24:08.281479 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:24:08.301348 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:24:08.314480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:24:08.326420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:24:08.348182 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:24:08.353502 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:24:08.366545 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 14 00:24:08.384365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:24:08.386148 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:24:08.407610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:24:08.420391 kernel: loop1: detected capacity change from 0 to 140768 Mar 14 00:24:08.496153 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Mar 14 00:24:08.496190 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Mar 14 00:24:08.512287 kernel: loop2: detected capacity change from 0 to 217752 Mar 14 00:24:08.528443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 14 00:24:08.666554 kernel: loop3: detected capacity change from 0 to 54824 Mar 14 00:24:08.750447 kernel: loop4: detected capacity change from 0 to 142488 Mar 14 00:24:08.808665 kernel: loop5: detected capacity change from 0 to 140768 Mar 14 00:24:08.882388 kernel: loop6: detected capacity change from 0 to 217752 Mar 14 00:24:08.933158 kernel: loop7: detected capacity change from 0 to 54824 Mar 14 00:24:08.956894 (sd-merge)[1167]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Mar 14 00:24:08.962603 (sd-merge)[1167]: Merged extensions into '/usr'. Mar 14 00:24:08.980535 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:24:08.981600 systemd[1]: Reloading... Mar 14 00:24:09.150361 zram_generator::config[1197]: No configuration found. Mar 14 00:24:09.367366 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:24:09.413776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:24:09.510918 systemd[1]: Reloading finished in 528 ms. Mar 14 00:24:09.542737 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:24:09.553080 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:24:09.564959 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:24:09.589637 systemd[1]: Starting ensure-sysext.service... Mar 14 00:24:09.601616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:24:09.633600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 14 00:24:09.650059 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:24:09.650086 systemd[1]: Reloading... Mar 14 00:24:09.654412 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:24:09.655465 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:24:09.657278 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:24:09.660574 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Mar 14 00:24:09.660838 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Mar 14 00:24:09.672267 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:24:09.674389 systemd-tmpfiles[1235]: Skipping /boot Mar 14 00:24:09.701299 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:24:09.702598 systemd-tmpfiles[1235]: Skipping /boot Mar 14 00:24:09.745327 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Mar 14 00:24:09.805424 zram_generator::config[1265]: No configuration found. Mar 14 00:24:10.109393 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 14 00:24:10.118386 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 14 00:24:10.128146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 14 00:24:10.129420 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:24:10.145443 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Mar 14 00:24:10.150616 kernel: ACPI: button: Sleep Button [SLPF] Mar 14 00:24:10.193367 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1303) Mar 14 00:24:10.241395 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 14 00:24:10.318978 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:24:10.319828 systemd[1]: Reloading finished in 668 ms. Mar 14 00:24:10.325398 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:24:10.353002 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:24:10.371103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:24:10.401392 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:24:10.433938 systemd[1]: Finished ensure-sysext.service. Mar 14 00:24:10.442956 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:24:10.475034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Mar 14 00:24:10.488117 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:24:10.498582 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:24:10.522412 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:24:10.531649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:24:10.537758 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Mar 14 00:24:10.552610 augenrules[1355]: No rules Mar 14 00:24:10.554626 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:24:10.575311 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:24:10.580119 lvm[1352]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:24:10.591618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:24:10.610073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:24:10.631709 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:24:10.640701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:24:10.645616 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:24:10.665828 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:24:10.688667 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:24:10.708657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:24:10.718496 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:24:10.734612 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:24:10.758640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:24:10.768508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:24:10.779959 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:24:10.791098 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 14 00:24:10.804049 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:24:10.804737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:24:10.804928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:24:10.805289 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:24:10.805976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:24:10.806457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:24:10.806679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:24:10.807161 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:24:10.807397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:24:10.812953 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:24:10.813570 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:24:10.826192 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 14 00:24:10.834906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:24:10.842597 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:24:10.845153 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Mar 14 00:24:10.845246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:24:10.845385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:24:10.854660 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:24:10.857163 lvm[1387]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 14 00:24:10.864643 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:24:10.864735 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:24:10.866273 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:24:10.890736 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:24:10.934701 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:24:10.937634 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Mar 14 00:24:10.958082 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:24:10.973696 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:24:11.076391 systemd-networkd[1368]: lo: Link UP Mar 14 00:24:11.076909 systemd-networkd[1368]: lo: Gained carrier Mar 14 00:24:11.079655 systemd-networkd[1368]: Enumeration completed Mar 14 00:24:11.080087 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:24:11.081351 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:24:11.081497 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:24:11.082480 systemd-resolved[1369]: Positive Trust Anchors: Mar 14 00:24:11.082509 systemd-resolved[1369]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:24:11.082571 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:24:11.083235 systemd-networkd[1368]: eth0: Link UP Mar 14 00:24:11.083243 systemd-networkd[1368]: eth0: Gained carrier Mar 14 00:24:11.083282 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:24:11.089693 systemd-resolved[1369]: Defaulting to hostname 'linux'. Mar 14 00:24:11.093435 systemd-networkd[1368]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:24:11.093461 systemd-networkd[1368]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 14 00:24:11.096588 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:24:11.107668 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:24:11.118745 systemd[1]: Reached target network.target - Network. Mar 14 00:24:11.127512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:24:11.138529 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 14 00:24:11.148655 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:24:11.160605 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:24:11.171803 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:24:11.181709 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:24:11.193569 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:24:11.204531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:24:11.204590 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:24:11.213549 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:24:11.224274 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:24:11.236389 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:24:11.248267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:24:11.260422 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:24:11.270669 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:24:11.280525 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:24:11.289570 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:24:11.289624 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:24:11.299519 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:24:11.311402 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:24:11.332805 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 14 00:24:11.356748 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:24:11.378740 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:24:11.388498 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:24:11.395938 jq[1420]: false Mar 14 00:24:11.398631 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:24:11.423222 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:24:11.429658 coreos-metadata[1418]: Mar 14 00:24:11.427 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Mar 14 00:24:11.433813 coreos-metadata[1418]: Mar 14 00:24:11.433 INFO Fetch successful Mar 14 00:24:11.433813 coreos-metadata[1418]: Mar 14 00:24:11.433 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Mar 14 00:24:11.438893 coreos-metadata[1418]: Mar 14 00:24:11.438 INFO Fetch successful Mar 14 00:24:11.438893 coreos-metadata[1418]: Mar 14 00:24:11.438 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Mar 14 00:24:11.439392 coreos-metadata[1418]: Mar 14 00:24:11.439 INFO Fetch successful Mar 14 00:24:11.439392 coreos-metadata[1418]: Mar 14 00:24:11.439 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Mar 14 00:24:11.440522 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:24:11.443737 coreos-metadata[1418]: Mar 14 00:24:11.441 INFO Fetch successful Mar 14 00:24:11.462163 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 14 00:24:11.468961 extend-filesystems[1421]: Found loop4 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found loop5 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found loop6 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found loop7 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda1 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda2 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda3 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found usr Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda4 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda6 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda7 Mar 14 00:24:11.485578 extend-filesystems[1421]: Found sda9 Mar 14 00:24:11.485578 extend-filesystems[1421]: Checking size of /dev/sda9 Mar 14 00:24:11.723511 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Mar 14 00:24:11.723594 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1314) Mar 14 00:24:11.723637 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: ---------------------------------------------------- Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: corporation. 
Support and training for ntp-4 are Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: available at https://www.nwtime.org/support Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: ---------------------------------------------------- Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: proto: precision = 0.076 usec (-24) Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: basedate set to 2026-03-01 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: gps base set to 2026-03-01 (week 2408) Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Listen normally on 3 eth0 10.128.0.67:123 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Listen normally on 4 lo [::1]:123 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:43%2 Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: Listening on routing socket on fd #21 for interface updates Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:24:11.723753 ntpd[1425]: 14 Mar 00:24:11 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:24:11.514170 ntpd[1425]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:24:11.728408 extend-filesystems[1421]: Resized 
partition /dev/sda9 Mar 14 00:24:11.487301 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:24:11.514251 ntpd[1425]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:24:11.738199 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:24:11.738199 extend-filesystems[1443]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 14 00:24:11.738199 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:24:11.738199 extend-filesystems[1443]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Mar 14 00:24:11.510737 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:24:11.514276 ntpd[1425]: ---------------------------------------------------- Mar 14 00:24:11.779956 extend-filesystems[1421]: Resized filesystem in /dev/sda9 Mar 14 00:24:11.555135 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Mar 14 00:24:11.514292 ntpd[1425]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:24:11.789111 update_engine[1446]: I20260314 00:24:11.669581 1446 main.cc:92] Flatcar Update Engine starting Mar 14 00:24:11.789111 update_engine[1446]: I20260314 00:24:11.678923 1446 update_check_scheduler.cc:74] Next update check in 7m56s Mar 14 00:24:11.556070 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:24:11.514306 ntpd[1425]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:24:11.564132 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:24:11.514322 ntpd[1425]: corporation. Support and training for ntp-4 are Mar 14 00:24:11.793137 jq[1448]: true Mar 14 00:24:11.608492 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 14 00:24:11.514357 ntpd[1425]: available at https://www.nwtime.org/support Mar 14 00:24:11.624697 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:24:11.514372 ntpd[1425]: ---------------------------------------------------- Mar 14 00:24:11.653053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:24:11.518680 ntpd[1425]: proto: precision = 0.076 usec (-24) Mar 14 00:24:11.654661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:24:11.521626 ntpd[1425]: basedate set to 2026-03-01 Mar 14 00:24:11.655235 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:24:11.521651 ntpd[1425]: gps base set to 2026-03-01 (week 2408) Mar 14 00:24:11.655562 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:24:11.529643 ntpd[1425]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:24:11.673258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:24:11.529707 ntpd[1425]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:24:11.674400 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:24:11.529972 ntpd[1425]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:24:11.697139 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:24:11.530033 ntpd[1425]: Listen normally on 3 eth0 10.128.0.67:123 Mar 14 00:24:11.697463 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 14 00:24:11.530099 ntpd[1425]: Listen normally on 4 lo [::1]:123 Mar 14 00:24:11.530166 ntpd[1425]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:24:11.530194 ntpd[1425]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Mar 14 00:24:11.530215 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:43%2 Mar 14 00:24:11.530267 ntpd[1425]: Listening on routing socket on fd #21 for interface updates Mar 14 00:24:11.534225 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:24:11.534272 ntpd[1425]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:24:11.537137 dbus-daemon[1419]: [system] SELinux support is enabled Mar 14 00:24:11.540906 dbus-daemon[1419]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1368 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:24:11.805179 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:24:11.774592 dbus-daemon[1419]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:24:11.805618 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:24:11.816659 jq[1456]: true Mar 14 00:24:11.842939 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:24:11.866613 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:24:11.867509 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 14 00:24:11.867543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:24:11.883401 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:24:11.885434 systemd-logind[1439]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 14 00:24:11.885475 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:24:11.889561 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:24:11.890116 systemd-logind[1439]: New seat seat0. Mar 14 00:24:11.900520 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:24:11.900574 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:24:11.920614 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:24:11.930954 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:24:11.944961 tar[1455]: linux-amd64/LICENSE Mar 14 00:24:11.946567 tar[1455]: linux-amd64/helm Mar 14 00:24:11.997200 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:24:12.007178 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:24:12.030058 systemd[1]: Starting sshkeys.service... Mar 14 00:24:12.087846 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:24:12.108963 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:24:12.185700 dbus-daemon[1419]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:24:12.186179 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 14 00:24:12.187549 dbus-daemon[1419]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1479 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:24:12.210872 systemd[1]: Starting polkit.service - Authorization Manager... Mar 14 00:24:12.309460 coreos-metadata[1491]: Mar 14 00:24:12.309 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Mar 14 00:24:12.329501 coreos-metadata[1491]: Mar 14 00:24:12.329 INFO Fetch failed with 404: resource not found Mar 14 00:24:12.329501 coreos-metadata[1491]: Mar 14 00:24:12.329 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Mar 14 00:24:12.330145 coreos-metadata[1491]: Mar 14 00:24:12.330 INFO Fetch successful Mar 14 00:24:12.330366 coreos-metadata[1491]: Mar 14 00:24:12.330 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Mar 14 00:24:12.335867 coreos-metadata[1491]: Mar 14 00:24:12.335 INFO Fetch failed with 404: resource not found Mar 14 00:24:12.335867 coreos-metadata[1491]: Mar 14 00:24:12.335 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Mar 14 00:24:12.335867 coreos-metadata[1491]: Mar 14 00:24:12.335 INFO Fetch failed with 404: resource not found Mar 14 00:24:12.335867 coreos-metadata[1491]: Mar 14 00:24:12.335 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Mar 14 00:24:12.335867 coreos-metadata[1491]: Mar 14 00:24:12.335 INFO Fetch successful Mar 14 00:24:12.337613 polkitd[1494]: Started polkitd version 121 Mar 14 00:24:12.351117 unknown[1491]: wrote ssh authorized keys file for user: core Mar 14 00:24:12.373826 polkitd[1494]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:24:12.375503 polkitd[1494]: Loading rules from directory 
/usr/share/polkit-1/rules.d Mar 14 00:24:12.378804 polkitd[1494]: Finished loading, compiling and executing 2 rules Mar 14 00:24:12.386997 dbus-daemon[1419]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:24:12.387484 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:24:12.392101 polkitd[1494]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:24:12.430036 update-ssh-keys[1507]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:24:12.429897 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:24:12.449202 systemd[1]: Finished sshkeys.service. Mar 14 00:24:12.471173 systemd-resolved[1369]: System hostname changed to 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da'. Mar 14 00:24:12.473375 systemd-hostnamed[1479]: Hostname set to (transient) Mar 14 00:24:12.516309 ntpd[1425]: bind(24) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:24:12.516983 ntpd[1425]: 14 Mar 00:24:12 ntpd[1425]: bind(24) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:24:12.516983 ntpd[1425]: 14 Mar 00:24:12 ntpd[1425]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:43%2#123 Mar 14 00:24:12.516983 ntpd[1425]: 14 Mar 00:24:12 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:43%2 Mar 14 00:24:12.516391 ntpd[1425]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:43%2#123 Mar 14 00:24:12.516414 ntpd[1425]: failed to init interface for address fe80::4001:aff:fe80:43%2 Mar 14 00:24:12.580615 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:24:12.615876 containerd[1462]: time="2026-03-14T00:24:12.614971232Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:24:12.690681 systemd-networkd[1368]: eth0: Gained 
IPv6LL Mar 14 00:24:12.697247 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:24:12.709325 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:24:12.721318 containerd[1462]: time="2026-03-14T00:24:12.721004983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:24:12.727793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.728964280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.729022351Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.729050997Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.729246254Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.729293723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.729570767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:24:12.729939 containerd[1462]: time="2026-03-14T00:24:12.729596463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.732499722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.732538768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.732563739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.732583803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.732715640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.733055620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.733275484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.733302894Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 14 00:24:12.733916 containerd[1462]: time="2026-03-14T00:24:12.733456538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:24:12.734627 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:24:12.735026 containerd[1462]: time="2026-03-14T00:24:12.734856246Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:24:12.744814 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:24:12.748554 containerd[1462]: time="2026-03-14T00:24:12.746109153Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:24:12.748554 containerd[1462]: time="2026-03-14T00:24:12.746190711Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:24:12.748554 containerd[1462]: time="2026-03-14T00:24:12.746219546Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:24:12.748554 containerd[1462]: time="2026-03-14T00:24:12.746246930Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:24:12.748554 containerd[1462]: time="2026-03-14T00:24:12.746279026Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:24:12.748554 containerd[1462]: time="2026-03-14T00:24:12.747606943Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:24:12.749771 containerd[1462]: time="2026-03-14T00:24:12.749728025Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:24:12.749980 containerd[1462]: time="2026-03-14T00:24:12.749950055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 14 00:24:12.750052 containerd[1462]: time="2026-03-14T00:24:12.749989133Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:24:12.750052 containerd[1462]: time="2026-03-14T00:24:12.750011383Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:24:12.750052 containerd[1462]: time="2026-03-14T00:24:12.750038598Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750064445Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750086617Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750113342Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750137748Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750160653Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750182270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:24:12.750213 containerd[1462]: time="2026-03-14T00:24:12.750205020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750238384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750263330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750284592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750308014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750330299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750377270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750400010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750438847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750462682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750488793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.750772 containerd[1462]: time="2026-03-14T00:24:12.750508569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.750528855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.751401816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.751433367Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.751482306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.751505409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.751526263Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:24:12.753764 containerd[1462]: time="2026-03-14T00:24:12.751618913Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.751650951Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.755061046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.755097315Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.755116509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.755137858Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.755162118Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:24:12.755370 containerd[1462]: time="2026-03-14T00:24:12.755180044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:24:12.757971 containerd[1462]: time="2026-03-14T00:24:12.755694155Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:24:12.757971 containerd[1462]: time="2026-03-14T00:24:12.755803625Z" level=info msg="Connect containerd service" Mar 14 00:24:12.757971 containerd[1462]: time="2026-03-14T00:24:12.755868483Z" level=info msg="using legacy CRI server" Mar 14 00:24:12.757971 containerd[1462]: time="2026-03-14T00:24:12.755883830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:24:12.757971 containerd[1462]: time="2026-03-14T00:24:12.756129573Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:24:12.760566 containerd[1462]: 
time="2026-03-14T00:24:12.758093740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.758253619Z" level=info msg="Start subscribing containerd event" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.758327552Z" level=info msg="Start recovering state" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.758451072Z" level=info msg="Start event monitor" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.758473574Z" level=info msg="Start snapshots syncer" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.758489571Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.758501830Z" level=info msg="Start streaming server" Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.759954349Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.760023988Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:24:12.760566 containerd[1462]: time="2026-03-14T00:24:12.760100754Z" level=info msg="containerd successfully booted in 0.147578s" Mar 14 00:24:12.760754 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Mar 14 00:24:12.770294 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:24:12.807841 init.sh[1525]: + '[' -e /etc/default/instance_configs.cfg.template ']' Mar 14 00:24:12.807841 init.sh[1525]: + echo -e '[InstanceSetup]\nset_host_keys = false' Mar 14 00:24:12.811737 init.sh[1525]: + /usr/bin/google_instance_setup Mar 14 00:24:12.836541 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 14 00:24:12.856072 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:24:12.876819 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:24:12.919073 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:24:12.919603 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:24:12.941090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:24:12.995935 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:24:13.016874 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:24:13.033887 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:24:13.044105 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:24:13.129684 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:24:13.152791 systemd[1]: Started sshd@0-10.128.0.67:22-4.153.228.146:41844.service - OpenSSH per-connection server daemon (4.153.228.146:41844). Mar 14 00:24:13.445919 tar[1455]: linux-amd64/README.md Mar 14 00:24:13.470574 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:24:13.509081 sshd[1548]: Accepted publickey for core from 4.153.228.146 port 41844 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:24:13.512503 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:13.533786 systemd-logind[1439]: New session 1 of user core. Mar 14 00:24:13.538474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:24:13.558787 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:24:13.607185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:24:13.629907 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 14 00:24:13.645999 instance-setup[1531]: INFO Running google_set_multiqueue. Mar 14 00:24:13.665878 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:24:13.677600 instance-setup[1531]: INFO Set channels for eth0 to 2. Mar 14 00:24:13.684537 instance-setup[1531]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 14 00:24:13.689522 instance-setup[1531]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 14 00:24:13.689848 instance-setup[1531]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 14 00:24:13.693323 instance-setup[1531]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 14 00:24:13.693733 instance-setup[1531]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 14 00:24:13.695427 instance-setup[1531]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 14 00:24:13.695781 instance-setup[1531]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Mar 14 00:24:13.697470 instance-setup[1531]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 14 00:24:13.707451 instance-setup[1531]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 14 00:24:13.718521 instance-setup[1531]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 14 00:24:13.719557 instance-setup[1531]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 14 00:24:13.719604 instance-setup[1531]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 14 00:24:13.754591 init.sh[1525]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 14 00:24:13.920771 systemd[1559]: Queued start job for default target default.target. Mar 14 00:24:13.927018 systemd[1559]: Created slice app.slice - User Application Slice. Mar 14 00:24:13.927062 systemd[1559]: Reached target paths.target - Paths. 
Mar 14 00:24:13.927099 systemd[1559]: Reached target timers.target - Timers. Mar 14 00:24:13.932412 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:24:13.963950 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:24:13.964157 systemd[1559]: Reached target sockets.target - Sockets. Mar 14 00:24:13.964184 systemd[1559]: Reached target basic.target - Basic System. Mar 14 00:24:13.964256 systemd[1559]: Reached target default.target - Main User Target. Mar 14 00:24:13.964313 systemd[1559]: Startup finished in 282ms. Mar 14 00:24:13.964625 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:24:13.981628 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:24:13.989012 startup-script[1591]: INFO Starting startup scripts. Mar 14 00:24:13.996727 startup-script[1591]: INFO No startup scripts found in metadata. Mar 14 00:24:13.996820 startup-script[1591]: INFO Finished running startup scripts. Mar 14 00:24:14.025501 init.sh[1525]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 14 00:24:14.025501 init.sh[1525]: + daemon_pids=() Mar 14 00:24:14.025501 init.sh[1525]: + for d in accounts clock_skew network Mar 14 00:24:14.025501 init.sh[1525]: + daemon_pids+=($!) Mar 14 00:24:14.025501 init.sh[1525]: + for d in accounts clock_skew network Mar 14 00:24:14.025501 init.sh[1525]: + daemon_pids+=($!) Mar 14 00:24:14.025501 init.sh[1525]: + for d in accounts clock_skew network Mar 14 00:24:14.025501 init.sh[1525]: + daemon_pids+=($!) 
Mar 14 00:24:14.025501 init.sh[1525]: + NOTIFY_SOCKET=/run/systemd/notify Mar 14 00:24:14.025501 init.sh[1525]: + /usr/bin/systemd-notify --ready Mar 14 00:24:14.026426 init.sh[1597]: + /usr/bin/google_accounts_daemon Mar 14 00:24:14.026750 init.sh[1598]: + /usr/bin/google_clock_skew_daemon Mar 14 00:24:14.027026 init.sh[1599]: + /usr/bin/google_network_daemon Mar 14 00:24:14.051000 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 14 00:24:14.061967 init.sh[1525]: + wait -n 1597 1598 1599 Mar 14 00:24:14.224818 systemd[1]: Started sshd@1-10.128.0.67:22-4.153.228.146:41856.service - OpenSSH per-connection server daemon (4.153.228.146:41856). Mar 14 00:24:14.466590 google-clock-skew[1598]: INFO Starting Google Clock Skew daemon. Mar 14 00:24:14.474972 google-clock-skew[1598]: INFO Clock drift token has changed: 0. Mar 14 00:24:14.532965 google-networking[1599]: INFO Starting Google Networking daemon. Mar 14 00:24:14.573201 sshd[1603]: Accepted publickey for core from 4.153.228.146 port 41856 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:24:14.574195 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:14.586418 systemd-logind[1439]: New session 2 of user core. Mar 14 00:24:14.591883 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:24:14.633724 groupadd[1613]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 14 00:24:14.638146 groupadd[1613]: group added to /etc/gshadow: name=google-sudoers Mar 14 00:24:14.699795 groupadd[1613]: new group: name=google-sudoers, GID=1000 Mar 14 00:24:14.738592 google-accounts[1597]: INFO Starting Google Accounts daemon. Mar 14 00:24:14.752358 google-accounts[1597]: WARNING OS Login not installed. Mar 14 00:24:14.754061 google-accounts[1597]: INFO Creating a new user account for 0. 
Mar 14 00:24:14.758759 init.sh[1623]: useradd: invalid user name '0': use --badname to ignore Mar 14 00:24:14.759126 google-accounts[1597]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 14 00:24:14.772621 sshd[1603]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:14.778949 systemd[1]: sshd@1-10.128.0.67:22-4.153.228.146:41856.service: Deactivated successfully. Mar 14 00:24:14.782597 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:24:14.785133 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:24:14.787261 systemd-logind[1439]: Removed session 2. Mar 14 00:24:14.829817 systemd[1]: Started sshd@2-10.128.0.67:22-4.153.228.146:41872.service - OpenSSH per-connection server daemon (4.153.228.146:41872). Mar 14 00:24:15.029606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:15.030137 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:24:15.041494 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:24:15.053652 systemd[1]: Startup finished in 1.045s (kernel) + 9.683s (initrd) + 9.675s (userspace) = 20.404s. Mar 14 00:24:15.080238 sshd[1628]: Accepted publickey for core from 4.153.228.146 port 41872 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:24:15.084566 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:15.093420 systemd-logind[1439]: New session 3 of user core. Mar 14 00:24:15.100624 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:24:15.268778 sshd[1628]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:15.277304 systemd[1]: sshd@2-10.128.0.67:22-4.153.228.146:41872.service: Deactivated successfully. 
Mar 14 00:24:15.280108 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:24:15.281210 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:24:15.284145 systemd-logind[1439]: Removed session 3. Mar 14 00:24:15.000859 systemd-resolved[1369]: Clock change detected. Flushing caches. Mar 14 00:24:15.017233 systemd-journald[1111]: Time jumped backwards, rotating. Mar 14 00:24:15.001508 google-clock-skew[1598]: INFO Synced system time with hardware clock. Mar 14 00:24:15.189345 ntpd[1425]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:43%2]:123 Mar 14 00:24:15.190022 ntpd[1425]: 14 Mar 00:24:15 ntpd[1425]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:43%2]:123 Mar 14 00:24:15.495562 kubelet[1635]: E0314 00:24:15.495487 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:24:15.498607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:24:15.498923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:24:15.499414 systemd[1]: kubelet.service: Consumed 1.212s CPU time. Mar 14 00:24:24.987424 systemd[1]: Started sshd@3-10.128.0.67:22-4.153.228.146:54476.service - OpenSSH per-connection server daemon (4.153.228.146:54476). Mar 14 00:24:25.225276 sshd[1652]: Accepted publickey for core from 4.153.228.146 port 54476 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:24:25.227207 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:25.233270 systemd-logind[1439]: New session 4 of user core. Mar 14 00:24:25.239954 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 14 00:24:25.409259 sshd[1652]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:25.414758 systemd[1]: sshd@3-10.128.0.67:22-4.153.228.146:54476.service: Deactivated successfully. Mar 14 00:24:25.417218 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:24:25.418256 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:24:25.419892 systemd-logind[1439]: Removed session 4. Mar 14 00:24:25.458119 systemd[1]: Started sshd@4-10.128.0.67:22-4.153.228.146:54488.service - OpenSSH per-connection server daemon (4.153.228.146:54488). Mar 14 00:24:25.658977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:24:25.671028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:25.708286 sshd[1659]: Accepted publickey for core from 4.153.228.146 port 54488 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:24:25.711347 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:25.718687 systemd-logind[1439]: New session 5 of user core. Mar 14 00:24:25.730504 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:24:25.889003 sshd[1659]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:25.895922 systemd[1]: sshd@4-10.128.0.67:22-4.153.228.146:54488.service: Deactivated successfully. Mar 14 00:24:25.898395 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:24:25.899401 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:24:25.901569 systemd-logind[1439]: Removed session 5. Mar 14 00:24:25.943182 systemd[1]: Started sshd@5-10.128.0.67:22-4.153.228.146:54502.service - OpenSSH per-connection server daemon (4.153.228.146:54502). Mar 14 00:24:26.041964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:24:26.050327 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:24:26.105321 kubelet[1676]: E0314 00:24:26.105252 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:24:26.109769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:24:26.110029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:24:26.232806 sshd[1669]: Accepted publickey for core from 4.153.228.146 port 54502 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:24:26.233797 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:26.240285 systemd-logind[1439]: New session 6 of user core. Mar 14 00:24:26.248954 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:24:26.439094 sshd[1669]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:26.444237 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:24:26.445206 systemd[1]: sshd@5-10.128.0.67:22-4.153.228.146:54502.service: Deactivated successfully. Mar 14 00:24:26.447741 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:24:26.449233 systemd-logind[1439]: Removed session 6. Mar 14 00:24:26.488110 systemd[1]: Started sshd@6-10.128.0.67:22-4.153.228.146:54508.service - OpenSSH per-connection server daemon (4.153.228.146:54508). 
Mar 14 00:24:26.726076 sshd[1689]: Accepted publickey for core from 4.153.228.146 port 54508 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:24:26.728003 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:26.734412 systemd-logind[1439]: New session 7 of user core.
Mar 14 00:24:26.739964 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:24:26.900788 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:24:26.901332 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:24:26.915542 sudo[1692]: pam_unix(sudo:session): session closed for user root
Mar 14 00:24:26.951947 sshd[1689]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:26.956597 systemd[1]: sshd@6-10.128.0.67:22-4.153.228.146:54508.service: Deactivated successfully.
Mar 14 00:24:26.959367 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:24:26.961291 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:24:26.963063 systemd-logind[1439]: Removed session 7.
Mar 14 00:24:27.000093 systemd[1]: Started sshd@7-10.128.0.67:22-4.153.228.146:54510.service - OpenSSH per-connection server daemon (4.153.228.146:54510).
Mar 14 00:24:27.254883 sshd[1697]: Accepted publickey for core from 4.153.228.146 port 54510 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:24:27.257020 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:27.263627 systemd-logind[1439]: New session 8 of user core.
Mar 14 00:24:27.265988 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:24:27.415131 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:24:27.415671 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:24:27.420975 sudo[1701]: pam_unix(sudo:session): session closed for user root
Mar 14 00:24:27.435090 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:24:27.435601 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:24:27.456142 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:24:27.458893 auditctl[1704]: No rules
Mar 14 00:24:27.459427 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:24:27.459718 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:24:27.463358 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:24:27.508233 augenrules[1722]: No rules
Mar 14 00:24:27.509272 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:24:27.510669 sudo[1700]: pam_unix(sudo:session): session closed for user root
Mar 14 00:24:27.548347 sshd[1697]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:27.552927 systemd[1]: sshd@7-10.128.0.67:22-4.153.228.146:54510.service: Deactivated successfully.
Mar 14 00:24:27.555363 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:24:27.557115 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:24:27.559123 systemd-logind[1439]: Removed session 8.
Mar 14 00:24:27.597161 systemd[1]: Started sshd@8-10.128.0.67:22-4.153.228.146:54514.service - OpenSSH per-connection server daemon (4.153.228.146:54514).
Mar 14 00:24:27.832127 sshd[1730]: Accepted publickey for core from 4.153.228.146 port 54514 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:24:27.834140 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:27.841126 systemd-logind[1439]: New session 9 of user core.
Mar 14 00:24:27.847973 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:24:27.988681 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:24:27.989221 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:24:28.457138 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:24:28.461030 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:24:28.925030 dockerd[1750]: time="2026-03-14T00:24:28.924490185Z" level=info msg="Starting up"
Mar 14 00:24:29.046814 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3773676764-merged.mount: Deactivated successfully.
Mar 14 00:24:29.057097 systemd[1]: var-lib-docker-metacopy\x2dcheck3146179077-merged.mount: Deactivated successfully.
Mar 14 00:24:29.078207 dockerd[1750]: time="2026-03-14T00:24:29.078145495Z" level=info msg="Loading containers: start."
Mar 14 00:24:29.232742 kernel: Initializing XFRM netlink socket
Mar 14 00:24:29.337174 systemd-networkd[1368]: docker0: Link UP
Mar 14 00:24:29.358960 dockerd[1750]: time="2026-03-14T00:24:29.358899065Z" level=info msg="Loading containers: done."
Mar 14 00:24:29.384616 dockerd[1750]: time="2026-03-14T00:24:29.384560282Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:24:29.384829 dockerd[1750]: time="2026-03-14T00:24:29.384731693Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:24:29.384964 dockerd[1750]: time="2026-03-14T00:24:29.384898007Z" level=info msg="Daemon has completed initialization"
Mar 14 00:24:29.424622 dockerd[1750]: time="2026-03-14T00:24:29.424194878Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:24:29.424482 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:24:29.814113 systemd[1]: Started sshd@9-10.128.0.67:22-211.22.222.251:46991.service - OpenSSH per-connection server daemon (211.22.222.251:46991).
Mar 14 00:24:30.040372 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1708885626-merged.mount: Deactivated successfully.
Mar 14 00:24:30.230743 containerd[1462]: time="2026-03-14T00:24:30.230226872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 14 00:24:30.359745 sshd[1887]: Connection closed by 211.22.222.251 port 46991
Mar 14 00:24:30.360889 systemd[1]: sshd@9-10.128.0.67:22-211.22.222.251:46991.service: Deactivated successfully.
Mar 14 00:24:30.812188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725587938.mount: Deactivated successfully.
Mar 14 00:24:32.369987 containerd[1462]: time="2026-03-14T00:24:32.369907269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:32.371619 containerd[1462]: time="2026-03-14T00:24:32.371560200Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27697898"
Mar 14 00:24:32.373502 containerd[1462]: time="2026-03-14T00:24:32.373428824Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:32.377980 containerd[1462]: time="2026-03-14T00:24:32.377887757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:32.380251 containerd[1462]: time="2026-03-14T00:24:32.379899180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.149561868s"
Mar 14 00:24:32.380251 containerd[1462]: time="2026-03-14T00:24:32.379961039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 14 00:24:32.381036 containerd[1462]: time="2026-03-14T00:24:32.380993071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 14 00:24:33.737279 containerd[1462]: time="2026-03-14T00:24:33.737207544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:33.739119 containerd[1462]: time="2026-03-14T00:24:33.738991079Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450946"
Mar 14 00:24:33.740241 containerd[1462]: time="2026-03-14T00:24:33.740160516Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:33.744413 containerd[1462]: time="2026-03-14T00:24:33.744203445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:33.746198 containerd[1462]: time="2026-03-14T00:24:33.746014740Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.364980648s"
Mar 14 00:24:33.746198 containerd[1462]: time="2026-03-14T00:24:33.746082243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 14 00:24:33.747202 containerd[1462]: time="2026-03-14T00:24:33.747054167Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 14 00:24:34.873515 containerd[1462]: time="2026-03-14T00:24:34.873446274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:34.875120 containerd[1462]: time="2026-03-14T00:24:34.875056296Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548657"
Mar 14 00:24:34.876511 containerd[1462]: time="2026-03-14T00:24:34.876420234Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:34.880571 containerd[1462]: time="2026-03-14T00:24:34.880504452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:34.882198 containerd[1462]: time="2026-03-14T00:24:34.882008337Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.134911325s"
Mar 14 00:24:34.882198 containerd[1462]: time="2026-03-14T00:24:34.882060846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 14 00:24:34.883198 containerd[1462]: time="2026-03-14T00:24:34.883158354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 14 00:24:35.982741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264705208.mount: Deactivated successfully.
Mar 14 00:24:36.360599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:24:36.370835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:24:36.659995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:24:36.663234 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:24:36.715420 containerd[1462]: time="2026-03-14T00:24:36.715355531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:36.717629 containerd[1462]: time="2026-03-14T00:24:36.717457128Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685639"
Mar 14 00:24:36.719205 containerd[1462]: time="2026-03-14T00:24:36.719160831Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:36.723397 kubelet[1974]: E0314 00:24:36.723302 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:24:36.724397 containerd[1462]: time="2026-03-14T00:24:36.723986318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:36.726023 containerd[1462]: time="2026-03-14T00:24:36.725832800Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.842629874s"
Mar 14 00:24:36.726023 containerd[1462]: time="2026-03-14T00:24:36.725892133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 14 00:24:36.725930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:24:36.726593 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:24:36.727013 containerd[1462]: time="2026-03-14T00:24:36.726959534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 14 00:24:37.182510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057993336.mount: Deactivated successfully.
Mar 14 00:24:38.515634 containerd[1462]: time="2026-03-14T00:24:38.515561846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:38.517326 containerd[1462]: time="2026-03-14T00:24:38.517253757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23558108"
Mar 14 00:24:38.518542 containerd[1462]: time="2026-03-14T00:24:38.518469041Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:38.522384 containerd[1462]: time="2026-03-14T00:24:38.522322722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:38.524096 containerd[1462]: time="2026-03-14T00:24:38.523921785Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.796920345s"
Mar 14 00:24:38.524096 containerd[1462]: time="2026-03-14T00:24:38.523967667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 14 00:24:38.525380 containerd[1462]: time="2026-03-14T00:24:38.525345294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:24:38.921981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74350741.mount: Deactivated successfully.
Mar 14 00:24:38.930240 containerd[1462]: time="2026-03-14T00:24:38.930168925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:38.931587 containerd[1462]: time="2026-03-14T00:24:38.931517032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321428"
Mar 14 00:24:38.932833 containerd[1462]: time="2026-03-14T00:24:38.932743541Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:38.936592 containerd[1462]: time="2026-03-14T00:24:38.936508321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:38.938866 containerd[1462]: time="2026-03-14T00:24:38.937861640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 412.469073ms"
Mar 14 00:24:38.938866 containerd[1462]: time="2026-03-14T00:24:38.937921857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:24:38.938866 containerd[1462]: time="2026-03-14T00:24:38.938732849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 14 00:24:39.372025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233544870.mount: Deactivated successfully.
Mar 14 00:24:40.630561 containerd[1462]: time="2026-03-14T00:24:40.630255905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:40.631639 containerd[1462]: time="2026-03-14T00:24:40.631538634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23631199"
Mar 14 00:24:40.632957 containerd[1462]: time="2026-03-14T00:24:40.632881573Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:40.639192 containerd[1462]: time="2026-03-14T00:24:40.639112559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:40.640832 containerd[1462]: time="2026-03-14T00:24:40.640596728Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.7018263s"
Mar 14 00:24:40.640832 containerd[1462]: time="2026-03-14T00:24:40.640665801Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 14 00:24:42.183162 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 00:24:42.600823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:24:42.608107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:24:42.661999 systemd[1]: Reloading requested from client PID 2132 ('systemctl') (unit session-9.scope)...
Mar 14 00:24:42.662019 systemd[1]: Reloading...
Mar 14 00:24:42.823759 zram_generator::config[2174]: No configuration found.
Mar 14 00:24:42.990577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:24:43.096292 systemd[1]: Reloading finished in 433 ms.
Mar 14 00:24:43.164584 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:24:43.164879 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:24:43.165256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:24:43.172227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:24:43.503579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:24:43.516325 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:24:43.579101 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:24:43.862805 kubelet[2224]: I0314 00:24:43.860662 2224 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:24:43.862805 kubelet[2224]: I0314 00:24:43.861071 2224 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:24:43.862805 kubelet[2224]: I0314 00:24:43.861106 2224 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:24:43.862805 kubelet[2224]: I0314 00:24:43.861115 2224 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:24:43.862805 kubelet[2224]: I0314 00:24:43.861507 2224 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:24:43.869510 kubelet[2224]: E0314 00:24:43.869456 2224 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:24:43.870615 kubelet[2224]: I0314 00:24:43.870506 2224 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:24:43.877055 kubelet[2224]: E0314 00:24:43.876975 2224 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:24:43.877055 kubelet[2224]: I0314 00:24:43.877045 2224 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:24:43.880943 kubelet[2224]: I0314 00:24:43.880914 2224 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:24:43.882441 kubelet[2224]: I0314 00:24:43.882363 2224 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:24:43.882689 kubelet[2224]: I0314 00:24:43.882416 2224 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:24:43.882689 kubelet[2224]: I0314 00:24:43.882680 2224 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:24:43.882964 kubelet[2224]: I0314 00:24:43.882721 2224 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:24:43.882964 kubelet[2224]: I0314 00:24:43.882876 2224 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:24:43.885160 kubelet[2224]: I0314 00:24:43.885135 2224 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:24:43.885434 kubelet[2224]: I0314 00:24:43.885414 2224 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:24:43.885518 kubelet[2224]: I0314 00:24:43.885445 2224 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:24:43.885518 kubelet[2224]: I0314 00:24:43.885486 2224 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:24:43.885518 kubelet[2224]: I0314 00:24:43.885502 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:24:43.889730 kubelet[2224]: I0314 00:24:43.889650 2224 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:24:43.894595 kubelet[2224]: I0314 00:24:43.894021 2224 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:24:43.894595 kubelet[2224]: I0314 00:24:43.894087 2224 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:24:43.894595 kubelet[2224]: W0314 00:24:43.894191 2224 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:24:43.912319 kubelet[2224]: I0314 00:24:43.912056 2224 server.go:1257] "Started kubelet"
Mar 14 00:24:43.914976 kubelet[2224]: I0314 00:24:43.914942 2224 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:24:43.917052 kubelet[2224]: I0314 00:24:43.916995 2224 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:24:43.926401 kubelet[2224]: I0314 00:24:43.924363 2224 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:24:43.926401 kubelet[2224]: E0314 00:24:43.922578 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da.189c8d744c3e3e76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,},FirstTimestamp:2026-03-14 00:24:43.91200319 +0000 UTC m=+0.390572202,LastTimestamp:2026-03-14 00:24:43.91200319 +0000 UTC m=+0.390572202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,}"
Mar 14 00:24:43.929269 kubelet[2224]: I0314 00:24:43.929173 2224 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:24:43.929383 kubelet[2224]: I0314 00:24:43.929301 2224 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:24:43.930232 kubelet[2224]: I0314 00:24:43.930153 2224 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:24:43.931793 kubelet[2224]: I0314 00:24:43.931757 2224 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:24:43.936489 kubelet[2224]: I0314 00:24:43.935639 2224 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:24:43.936489 kubelet[2224]: E0314 00:24:43.935866 2224 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found"
Mar 14 00:24:43.938150 kubelet[2224]: I0314 00:24:43.938120 2224 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:24:43.938310 kubelet[2224]: I0314 00:24:43.938285 2224 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:24:43.939671 kubelet[2224]: E0314 00:24:43.939609 2224 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="200ms"
Mar 14 00:24:43.941577 kubelet[2224]: I0314 00:24:43.940155 2224 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:24:43.941577 kubelet[2224]: I0314 00:24:43.940252 2224 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:24:43.942958 kubelet[2224]: I0314 00:24:43.942928 2224 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:24:43.952747 kubelet[2224]: E0314 00:24:43.952624 2224 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:24:43.969324 kubelet[2224]: I0314 00:24:43.969245 2224 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:24:43.976760 kubelet[2224]: I0314 00:24:43.975975 2224 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:24:43.976760 kubelet[2224]: I0314 00:24:43.976009 2224 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:24:43.976760 kubelet[2224]: I0314 00:24:43.976039 2224 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:24:43.976760 kubelet[2224]: E0314 00:24:43.976119 2224 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:24:43.991452 kubelet[2224]: I0314 00:24:43.991419 2224 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:24:43.991452 kubelet[2224]: I0314 00:24:43.991445 2224 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:24:43.991452 kubelet[2224]: I0314 00:24:43.991474 2224 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:24:43.994392 kubelet[2224]: I0314 00:24:43.994362 2224 policy_none.go:50] "Start"
Mar 14 00:24:43.994935 kubelet[2224]: I0314 00:24:43.994548 2224 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:24:43.994935 kubelet[2224]: I0314 00:24:43.994575 2224 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:24:43.997633 kubelet[2224]: I0314 00:24:43.996639 2224 policy_none.go:44] "Start"
Mar 14 00:24:44.003056 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:24:44.016845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:24:44.029865 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:24:44.032593 kubelet[2224]: E0314 00:24:44.032550 2224 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:24:44.032920 kubelet[2224]: I0314 00:24:44.032894 2224 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:24:44.033024 kubelet[2224]: I0314 00:24:44.032922 2224 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:24:44.034350 kubelet[2224]: I0314 00:24:44.033850 2224 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:24:44.036835 kubelet[2224]: E0314 00:24:44.036725 2224 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:24:44.036835 kubelet[2224]: E0314 00:24:44.036805 2224 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" Mar 14 00:24:44.101005 systemd[1]: Created slice kubepods-burstable-podfb6d0132775702c567c18d568e9b0f11.slice - libcontainer container kubepods-burstable-podfb6d0132775702c567c18d568e9b0f11.slice. Mar 14 00:24:44.112150 kubelet[2224]: E0314 00:24:44.112087 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.118066 systemd[1]: Created slice kubepods-burstable-pod04782c59c8bba4c76111789a9673835f.slice - libcontainer container kubepods-burstable-pod04782c59c8bba4c76111789a9673835f.slice. 
Mar 14 00:24:44.122923 kubelet[2224]: E0314 00:24:44.122631 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.134828 systemd[1]: Created slice kubepods-burstable-podc13b4a723311b93107b17211567500a5.slice - libcontainer container kubepods-burstable-podc13b4a723311b93107b17211567500a5.slice. Mar 14 00:24:44.138179 kubelet[2224]: E0314 00:24:44.137889 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.138179 kubelet[2224]: I0314 00:24:44.137973 2224 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.138632 kubelet[2224]: E0314 00:24:44.138596 2224 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140267 kubelet[2224]: I0314 00:24:44.139779 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140267 kubelet[2224]: I0314 00:24:44.139826 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140267 kubelet[2224]: I0314 00:24:44.139872 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140267 kubelet[2224]: I0314 00:24:44.139900 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04782c59c8bba4c76111789a9673835f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"04782c59c8bba4c76111789a9673835f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140532 kubelet[2224]: I0314 00:24:44.139941 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04782c59c8bba4c76111789a9673835f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"04782c59c8bba4c76111789a9673835f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140532 kubelet[2224]: I0314 00:24:44.139967 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140532 kubelet[2224]: I0314 00:24:44.139993 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140532 kubelet[2224]: I0314 00:24:44.140039 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb6d0132775702c567c18d568e9b0f11-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"fb6d0132775702c567c18d568e9b0f11\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140661 kubelet[2224]: I0314 00:24:44.140099 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04782c59c8bba4c76111789a9673835f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"04782c59c8bba4c76111789a9673835f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.140661 kubelet[2224]: E0314 00:24:44.140317 2224 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="400ms" Mar 14 00:24:44.342760 kubelet[2224]: I0314 00:24:44.342722 2224 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.343225 kubelet[2224]: E0314 00:24:44.343175 2224 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.417852 containerd[1462]: time="2026-03-14T00:24:44.417420130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,Uid:fb6d0132775702c567c18d568e9b0f11,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:44.431291 containerd[1462]: time="2026-03-14T00:24:44.431224939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,Uid:04782c59c8bba4c76111789a9673835f,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:44.447337 containerd[1462]: time="2026-03-14T00:24:44.447091452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,Uid:c13b4a723311b93107b17211567500a5,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:44.541989 kubelet[2224]: E0314 00:24:44.541911 2224 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="800ms" Mar 14 00:24:44.747801 kubelet[2224]: I0314 00:24:44.747654 
2224 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.748861 kubelet[2224]: E0314 00:24:44.748551 2224 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:44.876944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203692138.mount: Deactivated successfully. Mar 14 00:24:44.885733 containerd[1462]: time="2026-03-14T00:24:44.885650214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:44.888186 containerd[1462]: time="2026-03-14T00:24:44.888116160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:24:44.889505 containerd[1462]: time="2026-03-14T00:24:44.889438752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:44.891248 containerd[1462]: time="2026-03-14T00:24:44.891185177Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:44.892677 containerd[1462]: time="2026-03-14T00:24:44.892632541Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:44.893821 containerd[1462]: time="2026-03-14T00:24:44.893756959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=312266" Mar 14 00:24:44.894724 containerd[1462]: time="2026-03-14T00:24:44.894489592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:24:44.897614 containerd[1462]: time="2026-03-14T00:24:44.897496300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:44.899743 containerd[1462]: time="2026-03-14T00:24:44.898991307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.671809ms" Mar 14 00:24:44.900558 containerd[1462]: time="2026-03-14T00:24:44.900500014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.973967ms" Mar 14 00:24:44.905904 containerd[1462]: time="2026-03-14T00:24:44.905849590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.643437ms" Mar 14 00:24:45.144449 containerd[1462]: time="2026-03-14T00:24:45.142655966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:45.144449 containerd[1462]: time="2026-03-14T00:24:45.142767577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:45.144449 containerd[1462]: time="2026-03-14T00:24:45.142842822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:45.144449 containerd[1462]: time="2026-03-14T00:24:45.142979358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:45.149315 containerd[1462]: time="2026-03-14T00:24:45.148932305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:45.149315 containerd[1462]: time="2026-03-14T00:24:45.149029239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:45.149315 containerd[1462]: time="2026-03-14T00:24:45.149058962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:45.149315 containerd[1462]: time="2026-03-14T00:24:45.149198302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:45.155356 containerd[1462]: time="2026-03-14T00:24:45.154805444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:45.155356 containerd[1462]: time="2026-03-14T00:24:45.154905075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:45.155356 containerd[1462]: time="2026-03-14T00:24:45.155028490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:45.155693 containerd[1462]: time="2026-03-14T00:24:45.155612034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:45.198979 systemd[1]: Started cri-containerd-bbd136ec2584db9de664b2e8cf574485b29b108814202658f66eeb6785bba8f7.scope - libcontainer container bbd136ec2584db9de664b2e8cf574485b29b108814202658f66eeb6785bba8f7. Mar 14 00:24:45.209944 systemd[1]: Started cri-containerd-6973a0ae467ebe781365df919fcf9d4ae1b799b35f514c3d57f7b9e0801c5c69.scope - libcontainer container 6973a0ae467ebe781365df919fcf9d4ae1b799b35f514c3d57f7b9e0801c5c69. Mar 14 00:24:45.213747 systemd[1]: Started cri-containerd-cd00fc194e6f67a875f454e9f3c39a1bd0eefed8ec0fe99a5cc9f69623d1c240.scope - libcontainer container cd00fc194e6f67a875f454e9f3c39a1bd0eefed8ec0fe99a5cc9f69623d1c240. 
Mar 14 00:24:45.328438 containerd[1462]: time="2026-03-14T00:24:45.327910293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,Uid:fb6d0132775702c567c18d568e9b0f11,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd136ec2584db9de664b2e8cf574485b29b108814202658f66eeb6785bba8f7\"" Mar 14 00:24:45.332375 kubelet[2224]: E0314 00:24:45.332278 2224 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f9960" Mar 14 00:24:45.341388 containerd[1462]: time="2026-03-14T00:24:45.340085750Z" level=info msg="CreateContainer within sandbox \"bbd136ec2584db9de664b2e8cf574485b29b108814202658f66eeb6785bba8f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:24:45.342753 kubelet[2224]: E0314 00:24:45.342586 2224 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="1.6s" Mar 14 00:24:45.343262 containerd[1462]: time="2026-03-14T00:24:45.343219092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,Uid:04782c59c8bba4c76111789a9673835f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6973a0ae467ebe781365df919fcf9d4ae1b799b35f514c3d57f7b9e0801c5c69\"" Mar 14 00:24:45.349757 containerd[1462]: time="2026-03-14T00:24:45.349316282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da,Uid:c13b4a723311b93107b17211567500a5,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"cd00fc194e6f67a875f454e9f3c39a1bd0eefed8ec0fe99a5cc9f69623d1c240\"" Mar 14 00:24:45.349885 kubelet[2224]: E0314 00:24:45.349352 2224 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f9960" Mar 14 00:24:45.354414 kubelet[2224]: E0314 00:24:45.354360 2224 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01c" Mar 14 00:24:45.356326 containerd[1462]: time="2026-03-14T00:24:45.356265419Z" level=info msg="CreateContainer within sandbox \"6973a0ae467ebe781365df919fcf9d4ae1b799b35f514c3d57f7b9e0801c5c69\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:24:45.358444 containerd[1462]: time="2026-03-14T00:24:45.358379195Z" level=info msg="CreateContainer within sandbox \"cd00fc194e6f67a875f454e9f3c39a1bd0eefed8ec0fe99a5cc9f69623d1c240\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:24:45.372264 containerd[1462]: time="2026-03-14T00:24:45.372017853Z" level=info msg="CreateContainer within sandbox \"bbd136ec2584db9de664b2e8cf574485b29b108814202658f66eeb6785bba8f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57f7fe1965e2d304bb671e482f7addd890d03d958b024d48157352afb0d0d76b\"" Mar 14 00:24:45.374035 containerd[1462]: time="2026-03-14T00:24:45.373306207Z" level=info msg="StartContainer for \"57f7fe1965e2d304bb671e482f7addd890d03d958b024d48157352afb0d0d76b\"" Mar 14 00:24:45.384315 containerd[1462]: time="2026-03-14T00:24:45.384247173Z" level=info msg="CreateContainer within sandbox \"6973a0ae467ebe781365df919fcf9d4ae1b799b35f514c3d57f7b9e0801c5c69\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1cf4db392cf4e93bb4637eee8e2e55a4be0dacc6249993edd31f99c1992a4215\"" Mar 14 00:24:45.386731 containerd[1462]: time="2026-03-14T00:24:45.385724880Z" level=info msg="StartContainer for \"1cf4db392cf4e93bb4637eee8e2e55a4be0dacc6249993edd31f99c1992a4215\"" Mar 14 00:24:45.392419 containerd[1462]: time="2026-03-14T00:24:45.392347609Z" level=info msg="CreateContainer within sandbox \"cd00fc194e6f67a875f454e9f3c39a1bd0eefed8ec0fe99a5cc9f69623d1c240\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65d2a82a556f845838b9d0ddec6954aea09dbe0e33e67b4567ef931dfda4a4e9\"" Mar 14 00:24:45.393143 containerd[1462]: time="2026-03-14T00:24:45.393107166Z" level=info msg="StartContainer for \"65d2a82a556f845838b9d0ddec6954aea09dbe0e33e67b4567ef931dfda4a4e9\"" Mar 14 00:24:45.440120 systemd[1]: Started cri-containerd-57f7fe1965e2d304bb671e482f7addd890d03d958b024d48157352afb0d0d76b.scope - libcontainer container 57f7fe1965e2d304bb671e482f7addd890d03d958b024d48157352afb0d0d76b. Mar 14 00:24:45.456150 systemd[1]: Started cri-containerd-1cf4db392cf4e93bb4637eee8e2e55a4be0dacc6249993edd31f99c1992a4215.scope - libcontainer container 1cf4db392cf4e93bb4637eee8e2e55a4be0dacc6249993edd31f99c1992a4215. Mar 14 00:24:45.470958 systemd[1]: Started cri-containerd-65d2a82a556f845838b9d0ddec6954aea09dbe0e33e67b4567ef931dfda4a4e9.scope - libcontainer container 65d2a82a556f845838b9d0ddec6954aea09dbe0e33e67b4567ef931dfda4a4e9. 
Mar 14 00:24:45.562739 containerd[1462]: time="2026-03-14T00:24:45.562660253Z" level=info msg="StartContainer for \"57f7fe1965e2d304bb671e482f7addd890d03d958b024d48157352afb0d0d76b\" returns successfully" Mar 14 00:24:45.565185 kubelet[2224]: I0314 00:24:45.564906 2224 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:45.566015 kubelet[2224]: E0314 00:24:45.565887 2224 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:45.585393 containerd[1462]: time="2026-03-14T00:24:45.585203242Z" level=info msg="StartContainer for \"1cf4db392cf4e93bb4637eee8e2e55a4be0dacc6249993edd31f99c1992a4215\" returns successfully" Mar 14 00:24:45.601122 containerd[1462]: time="2026-03-14T00:24:45.600963816Z" level=info msg="StartContainer for \"65d2a82a556f845838b9d0ddec6954aea09dbe0e33e67b4567ef931dfda4a4e9\" returns successfully" Mar 14 00:24:45.994730 kubelet[2224]: E0314 00:24:45.994198 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:45.996603 kubelet[2224]: E0314 00:24:45.996573 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:46.001094 kubelet[2224]: E0314 00:24:46.001063 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" 
node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.005149 kubelet[2224]: E0314 00:24:47.005097 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.006306 kubelet[2224]: E0314 00:24:47.006266 2224 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.173589 kubelet[2224]: I0314 00:24:47.173540 2224 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.755363 kubelet[2224]: E0314 00:24:47.755310 2224 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" not found" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.852202 kubelet[2224]: I0314 00:24:47.852088 2224 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.891020 kubelet[2224]: I0314 00:24:47.890960 2224 apiserver.go:52] "Watching apiserver" Mar 14 00:24:47.936842 kubelet[2224]: I0314 00:24:47.936791 2224 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.937923 kubelet[2224]: I0314 00:24:47.937876 2224 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:24:47.942642 kubelet[2224]: E0314 00:24:47.942605 2224 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.942642 kubelet[2224]: I0314 00:24:47.942645 2224 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.944591 kubelet[2224]: E0314 00:24:47.944552 2224 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.944591 kubelet[2224]: I0314 00:24:47.944594 2224 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:47.947339 kubelet[2224]: E0314 00:24:47.947306 2224 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:49.957596 kubelet[2224]: I0314 00:24:49.957542 2224 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:24:49.965184 kubelet[2224]: I0314 00:24:49.965130 2224 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 14 00:24:50.005141 systemd[1]: Reloading requested from client PID 2506 ('systemctl') (unit session-9.scope)... 
Mar 14 00:24:50.005624 systemd[1]: Reloading... Mar 14 00:24:50.162731 zram_generator::config[2549]: No configuration found. Mar 14 00:24:50.296104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:24:50.425049 systemd[1]: Reloading finished in 418 ms. Mar 14 00:24:50.486591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:50.504046 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:24:50.504396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:50.512122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:50.816017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:50.826351 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:24:50.898412 kubelet[2594]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:24:50.908120 kubelet[2594]: I0314 00:24:50.908038 2594 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:24:50.908120 kubelet[2594]: I0314 00:24:50.908119 2594 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:24:50.908357 kubelet[2594]: I0314 00:24:50.908145 2594 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:24:50.908357 kubelet[2594]: I0314 00:24:50.908153 2594 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:24:50.908664 kubelet[2594]: I0314 00:24:50.908625 2594 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:24:50.910848 kubelet[2594]: I0314 00:24:50.910816 2594 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:24:50.914029 kubelet[2594]: I0314 00:24:50.913762 2594 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:24:50.920734 kubelet[2594]: E0314 00:24:50.920570 2594 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:24:50.920734 kubelet[2594]: I0314 00:24:50.920636 2594 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:24:50.924642 kubelet[2594]: I0314 00:24:50.924615 2594 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:24:50.925065 kubelet[2594]: I0314 00:24:50.925004 2594 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:24:50.925292 kubelet[2594]: I0314 00:24:50.925046 2594 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:24:50.925292 kubelet[2594]: I0314 00:24:50.925289 2594 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:24:50.925506 kubelet[2594]: I0314 00:24:50.925303 2594 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:24:50.925506 kubelet[2594]: I0314 00:24:50.925338 2594 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:24:50.925661 kubelet[2594]: I0314 00:24:50.925635 2594 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:24:50.927809 kubelet[2594]: I0314 00:24:50.925894 2594 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:24:50.927809 kubelet[2594]: I0314 00:24:50.925925 2594 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:24:50.927809 kubelet[2594]: I0314 00:24:50.925957 2594 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:24:50.927809 kubelet[2594]: I0314 00:24:50.925972 2594 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:24:50.928728 kubelet[2594]: I0314 00:24:50.928435 2594 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:24:50.929811 kubelet[2594]: I0314 00:24:50.929781 2594 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:24:50.929886 kubelet[2594]: I0314 00:24:50.929836 2594 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:24:50.962618 kubelet[2594]: I0314 00:24:50.961371 2594 server.go:1257] "Started kubelet"
Mar 14 00:24:50.962618 kubelet[2594]: I0314 00:24:50.961872 2594 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:24:50.962618 kubelet[2594]: I0314 00:24:50.961970 2594 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:24:50.962618 kubelet[2594]: I0314 00:24:50.962062 2594 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:24:50.962618 kubelet[2594]: I0314 00:24:50.962420 2594 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:24:50.965044 kubelet[2594]: I0314 00:24:50.964939 2594 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:24:50.971104 kubelet[2594]: I0314 00:24:50.971069 2594 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:24:50.976592 kubelet[2594]: I0314 00:24:50.976117 2594 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:24:50.976592 kubelet[2594]: I0314 00:24:50.971307 2594 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:24:50.976592 kubelet[2594]: I0314 00:24:50.976366 2594 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:24:50.977245 kubelet[2594]: I0314 00:24:50.977136 2594 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:24:50.990112 kubelet[2594]: I0314 00:24:50.990071 2594 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:24:50.990275 kubelet[2594]: I0314 00:24:50.990216 2594 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:24:51.005346 kubelet[2594]: I0314 00:24:51.003799 2594 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:24:51.006612 kubelet[2594]: E0314 00:24:51.006520 2594 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:24:51.024232 kubelet[2594]: I0314 00:24:51.024047 2594 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:24:51.048499 kubelet[2594]: I0314 00:24:51.048461 2594 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:24:51.049155 kubelet[2594]: I0314 00:24:51.048683 2594 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:24:51.049155 kubelet[2594]: I0314 00:24:51.048749 2594 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:24:51.049155 kubelet[2594]: E0314 00:24:51.048818 2594 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:24:51.088825 kubelet[2594]: I0314 00:24:51.088679 2594 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:24:51.088825 kubelet[2594]: I0314 00:24:51.088719 2594 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:24:51.088825 kubelet[2594]: I0314 00:24:51.088757 2594 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.088949 2594 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.088968 2594 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.088998 2594 policy_none.go:50] "Start"
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.089012 2594 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.089030 2594 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.089189 2594 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:24:51.090177 kubelet[2594]: I0314 00:24:51.089202 2594 policy_none.go:44] "Start"
Mar 14 00:24:51.103151 kubelet[2594]: E0314 00:24:51.101461 2594 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:24:51.103151 kubelet[2594]: I0314 00:24:51.101760 2594 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:24:51.103151 kubelet[2594]: I0314 00:24:51.101778 2594 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:24:51.103151 kubelet[2594]: I0314 00:24:51.102289 2594 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:24:51.110386 kubelet[2594]: E0314 00:24:51.110325 2594 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:24:51.152741 kubelet[2594]: I0314 00:24:51.151938 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.157200 kubelet[2594]: I0314 00:24:51.156930 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.157572 kubelet[2594]: I0314 00:24:51.157545 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.170971 kubelet[2594]: I0314 00:24:51.170927 2594 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 14 00:24:51.173783 kubelet[2594]: I0314 00:24:51.173728 2594 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 14 00:24:51.176529 kubelet[2594]: I0314 00:24:51.176496 2594 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 14 00:24:51.176664 kubelet[2594]: E0314 00:24:51.176565 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.179932 kubelet[2594]: I0314 00:24:51.179898 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04782c59c8bba4c76111789a9673835f-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"04782c59c8bba4c76111789a9673835f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180074 kubelet[2594]: I0314 00:24:51.179941 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04782c59c8bba4c76111789a9673835f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"04782c59c8bba4c76111789a9673835f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180074 kubelet[2594]: I0314 00:24:51.179971 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04782c59c8bba4c76111789a9673835f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"04782c59c8bba4c76111789a9673835f\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180074 kubelet[2594]: I0314 00:24:51.180027 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180243 kubelet[2594]: I0314 00:24:51.180099 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180243 kubelet[2594]: I0314 00:24:51.180138 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180243 kubelet[2594]: I0314 00:24:51.180198 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb6d0132775702c567c18d568e9b0f11-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"fb6d0132775702c567c18d568e9b0f11\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180243 kubelet[2594]: I0314 00:24:51.180239 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.180450 kubelet[2594]: I0314 00:24:51.180302 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c13b4a723311b93107b17211567500a5-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" (UID: \"c13b4a723311b93107b17211567500a5\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.227137 kubelet[2594]: I0314 00:24:51.226384 2594 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.239538 kubelet[2594]: I0314 00:24:51.239381 2594 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.239747 kubelet[2594]: I0314 00:24:51.239573 2594 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:51.927979 kubelet[2594]: I0314 00:24:51.926479 2594 apiserver.go:52] "Watching apiserver"
Mar 14 00:24:51.976646 kubelet[2594]: I0314 00:24:51.976576 2594 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:24:52.040053 kubelet[2594]: I0314 00:24:52.039916 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" podStartSLOduration=1.039896811 podStartE2EDuration="1.039896811s" podCreationTimestamp="2026-03-14 00:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:52.039101509 +0000 UTC m=+1.204588783" watchObservedRunningTime="2026-03-14 00:24:52.039896811 +0000 UTC m=+1.205384086"
Mar 14 00:24:52.053404 kubelet[2594]: I0314 00:24:52.053326 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" podStartSLOduration=1.053302624 podStartE2EDuration="1.053302624s" podCreationTimestamp="2026-03-14 00:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:52.052467143 +0000 UTC m=+1.217954418" watchObservedRunningTime="2026-03-14 00:24:52.053302624 +0000 UTC m=+1.218789891"
Mar 14 00:24:52.068772 kubelet[2594]: I0314 00:24:52.067746 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" podStartSLOduration=3.067721329 podStartE2EDuration="3.067721329s" podCreationTimestamp="2026-03-14 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:52.067424703 +0000 UTC m=+1.232911975" watchObservedRunningTime="2026-03-14 00:24:52.067721329 +0000 UTC m=+1.233208599"
Mar 14 00:24:52.073731 kubelet[2594]: I0314 00:24:52.073534 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:52.074505 kubelet[2594]: I0314 00:24:52.074477 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:52.088360 kubelet[2594]: I0314 00:24:52.087984 2594 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 14 00:24:52.088360 kubelet[2594]: E0314 00:24:52.088067 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:52.089745 kubelet[2594]: I0314 00:24:52.089715 2594 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Mar 14 00:24:52.091200 kubelet[2594]: E0314 00:24:52.089941 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da"
Mar 14 00:24:56.082338 kubelet[2594]: I0314 00:24:56.082282 2594 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:24:56.083075 containerd[1462]: time="2026-03-14T00:24:56.082841150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:24:56.083496 kubelet[2594]: I0314 00:24:56.083085 2594 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:24:56.327198 update_engine[1446]: I20260314 00:24:56.325886 1446 update_attempter.cc:509] Updating boot flags...
Mar 14 00:24:56.397747 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2652)
Mar 14 00:24:56.538999 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2652)
Mar 14 00:24:56.664835 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2652)
Mar 14 00:24:56.922819 kubelet[2594]: I0314 00:24:56.920832 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1e6f75c-b39e-43a4-a580-c228606f8431-kube-proxy\") pod \"kube-proxy-77gsp\" (UID: \"a1e6f75c-b39e-43a4-a580-c228606f8431\") " pod="kube-system/kube-proxy-77gsp"
Mar 14 00:24:56.922819 kubelet[2594]: I0314 00:24:56.920877 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1e6f75c-b39e-43a4-a580-c228606f8431-xtables-lock\") pod \"kube-proxy-77gsp\" (UID: \"a1e6f75c-b39e-43a4-a580-c228606f8431\") " pod="kube-system/kube-proxy-77gsp"
Mar 14 00:24:56.922819 kubelet[2594]: I0314 00:24:56.920902 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1e6f75c-b39e-43a4-a580-c228606f8431-lib-modules\") pod \"kube-proxy-77gsp\" (UID: \"a1e6f75c-b39e-43a4-a580-c228606f8431\") " pod="kube-system/kube-proxy-77gsp"
Mar 14 00:24:56.922819 kubelet[2594]: I0314 00:24:56.920928 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d98w9\" (UniqueName: \"kubernetes.io/projected/a1e6f75c-b39e-43a4-a580-c228606f8431-kube-api-access-d98w9\") pod \"kube-proxy-77gsp\" (UID: \"a1e6f75c-b39e-43a4-a580-c228606f8431\") " pod="kube-system/kube-proxy-77gsp"
Mar 14 00:24:56.927105 systemd[1]: Created slice kubepods-besteffort-poda1e6f75c_b39e_43a4_a580_c228606f8431.slice - libcontainer container kubepods-besteffort-poda1e6f75c_b39e_43a4_a580_c228606f8431.slice.
Mar 14 00:24:56.927862 kubelet[2594]: E0314 00:24:56.927086 2594 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-77gsp\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' and this object" podUID="a1e6f75c-b39e-43a4-a580-c228606f8431" pod="kube-system/kube-proxy-77gsp"
Mar 14 00:24:57.242106 containerd[1462]: time="2026-03-14T00:24:57.241955308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77gsp,Uid:a1e6f75c-b39e-43a4-a580-c228606f8431,Namespace:kube-system,Attempt:0,}"
Mar 14 00:24:57.277192 containerd[1462]: time="2026-03-14T00:24:57.277067050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:24:57.277693 containerd[1462]: time="2026-03-14T00:24:57.277168700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:24:57.277693 containerd[1462]: time="2026-03-14T00:24:57.277203121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:57.277693 containerd[1462]: time="2026-03-14T00:24:57.277359250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:57.336975 systemd[1]: run-containerd-runc-k8s.io-36286d04187554fedda1cd450962d3f7c5c1076e763356e71a67578801a65149-runc.6RHFvn.mount: Deactivated successfully.
Mar 14 00:24:57.350993 systemd[1]: Started cri-containerd-36286d04187554fedda1cd450962d3f7c5c1076e763356e71a67578801a65149.scope - libcontainer container 36286d04187554fedda1cd450962d3f7c5c1076e763356e71a67578801a65149.
Mar 14 00:24:57.404783 systemd[1]: Created slice kubepods-besteffort-pod1140a679_d017_43cb_b5d8_69fc73d63b97.slice - libcontainer container kubepods-besteffort-pod1140a679_d017_43cb_b5d8_69fc73d63b97.slice.
Mar 14 00:24:57.425837 kubelet[2594]: I0314 00:24:57.425663 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1140a679-d017-43cb-b5d8-69fc73d63b97-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-nkww6\" (UID: \"1140a679-d017-43cb-b5d8-69fc73d63b97\") " pod="tigera-operator/tigera-operator-6cf4cccc57-nkww6"
Mar 14 00:24:57.429791 kubelet[2594]: I0314 00:24:57.428248 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lg8\" (UniqueName: \"kubernetes.io/projected/1140a679-d017-43cb-b5d8-69fc73d63b97-kube-api-access-p4lg8\") pod \"tigera-operator-6cf4cccc57-nkww6\" (UID: \"1140a679-d017-43cb-b5d8-69fc73d63b97\") " pod="tigera-operator/tigera-operator-6cf4cccc57-nkww6"
Mar 14 00:24:57.438912 containerd[1462]: time="2026-03-14T00:24:57.438850465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77gsp,Uid:a1e6f75c-b39e-43a4-a580-c228606f8431,Namespace:kube-system,Attempt:0,} returns sandbox id \"36286d04187554fedda1cd450962d3f7c5c1076e763356e71a67578801a65149\""
Mar 14 00:24:57.448182 containerd[1462]: time="2026-03-14T00:24:57.448128340Z" level=info msg="CreateContainer within sandbox \"36286d04187554fedda1cd450962d3f7c5c1076e763356e71a67578801a65149\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:24:57.466449 containerd[1462]: time="2026-03-14T00:24:57.466380038Z" level=info msg="CreateContainer within sandbox \"36286d04187554fedda1cd450962d3f7c5c1076e763356e71a67578801a65149\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b860b117ca368c464d843355013abbf62faf9dcac2ac97df51b962bf02a369e4\""
Mar 14 00:24:57.467658 containerd[1462]: time="2026-03-14T00:24:57.467598958Z" level=info msg="StartContainer for \"b860b117ca368c464d843355013abbf62faf9dcac2ac97df51b962bf02a369e4\""
Mar 14 00:24:57.508964 systemd[1]: Started cri-containerd-b860b117ca368c464d843355013abbf62faf9dcac2ac97df51b962bf02a369e4.scope - libcontainer container b860b117ca368c464d843355013abbf62faf9dcac2ac97df51b962bf02a369e4.
Mar 14 00:24:57.563343 containerd[1462]: time="2026-03-14T00:24:57.563160327Z" level=info msg="StartContainer for \"b860b117ca368c464d843355013abbf62faf9dcac2ac97df51b962bf02a369e4\" returns successfully"
Mar 14 00:24:57.713627 containerd[1462]: time="2026-03-14T00:24:57.713574400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-nkww6,Uid:1140a679-d017-43cb-b5d8-69fc73d63b97,Namespace:tigera-operator,Attempt:0,}"
Mar 14 00:24:57.748654 containerd[1462]: time="2026-03-14T00:24:57.748270988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:24:57.748654 containerd[1462]: time="2026-03-14T00:24:57.748431428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:24:57.748654 containerd[1462]: time="2026-03-14T00:24:57.748458766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:57.748654 containerd[1462]: time="2026-03-14T00:24:57.748583677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:57.772939 systemd[1]: Started cri-containerd-f4bbe878f533f61ebd49b4a88eeaf2488dd0ec5dcf35d19ad40db3f4439ea6e2.scope - libcontainer container f4bbe878f533f61ebd49b4a88eeaf2488dd0ec5dcf35d19ad40db3f4439ea6e2.
Mar 14 00:24:57.857335 containerd[1462]: time="2026-03-14T00:24:57.857127779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-nkww6,Uid:1140a679-d017-43cb-b5d8-69fc73d63b97,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f4bbe878f533f61ebd49b4a88eeaf2488dd0ec5dcf35d19ad40db3f4439ea6e2\""
Mar 14 00:24:57.861984 containerd[1462]: time="2026-03-14T00:24:57.861950379Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 14 00:24:58.111787 kubelet[2594]: I0314 00:24:58.111709 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-77gsp" podStartSLOduration=2.111671217 podStartE2EDuration="2.111671217s" podCreationTimestamp="2026-03-14 00:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:58.110340167 +0000 UTC m=+7.275827439" watchObservedRunningTime="2026-03-14 00:24:58.111671217 +0000 UTC m=+7.277158489"
Mar 14 00:24:58.789644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927531247.mount: Deactivated successfully.
Mar 14 00:25:00.277356 containerd[1462]: time="2026-03-14T00:25:00.277251007Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:00.278919 containerd[1462]: time="2026-03-14T00:25:00.278837829Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 14 00:25:00.280257 containerd[1462]: time="2026-03-14T00:25:00.280190091Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:00.284527 containerd[1462]: time="2026-03-14T00:25:00.284485576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:00.286307 containerd[1462]: time="2026-03-14T00:25:00.285599753Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.423442241s"
Mar 14 00:25:00.286307 containerd[1462]: time="2026-03-14T00:25:00.285726756Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 14 00:25:00.292104 containerd[1462]: time="2026-03-14T00:25:00.291937690Z" level=info msg="CreateContainer within sandbox \"f4bbe878f533f61ebd49b4a88eeaf2488dd0ec5dcf35d19ad40db3f4439ea6e2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 14 00:25:00.308105 containerd[1462]: time="2026-03-14T00:25:00.308025890Z" level=info msg="CreateContainer within sandbox \"f4bbe878f533f61ebd49b4a88eeaf2488dd0ec5dcf35d19ad40db3f4439ea6e2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8611ef1da11de631322825d693a047c94c0268f29e693fe5b615edb405b10a14\""
Mar 14 00:25:00.311504 containerd[1462]: time="2026-03-14T00:25:00.309924709Z" level=info msg="StartContainer for \"8611ef1da11de631322825d693a047c94c0268f29e693fe5b615edb405b10a14\""
Mar 14 00:25:00.359730 systemd[1]: run-containerd-runc-k8s.io-8611ef1da11de631322825d693a047c94c0268f29e693fe5b615edb405b10a14-runc.N3Y0S6.mount: Deactivated successfully.
Mar 14 00:25:00.370985 systemd[1]: Started cri-containerd-8611ef1da11de631322825d693a047c94c0268f29e693fe5b615edb405b10a14.scope - libcontainer container 8611ef1da11de631322825d693a047c94c0268f29e693fe5b615edb405b10a14.
Mar 14 00:25:00.406296 containerd[1462]: time="2026-03-14T00:25:00.406154078Z" level=info msg="StartContainer for \"8611ef1da11de631322825d693a047c94c0268f29e693fe5b615edb405b10a14\" returns successfully"
Mar 14 00:25:03.407189 systemd[1]: Started sshd@10-10.128.0.67:22-80.94.95.115:52712.service - OpenSSH per-connection server daemon (80.94.95.115:52712).
Mar 14 00:25:03.789098 kubelet[2594]: I0314 00:25:03.788223 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-nkww6" podStartSLOduration=4.362742319 podStartE2EDuration="6.78820216s" podCreationTimestamp="2026-03-14 00:24:57 +0000 UTC" firstStartedPulling="2026-03-14 00:24:57.861459683 +0000 UTC m=+7.026946949" lastFinishedPulling="2026-03-14 00:25:00.286919529 +0000 UTC m=+9.452406790" observedRunningTime="2026-03-14 00:25:01.122019796 +0000 UTC m=+10.287507069" watchObservedRunningTime="2026-03-14 00:25:03.78820216 +0000 UTC m=+12.953689432"
Mar 14 00:25:04.656909 sshd[2961]: Invalid user test from 80.94.95.115 port 52712
Mar 14 00:25:04.897849 sshd[2961]: Connection closed by invalid user test 80.94.95.115 port 52712 [preauth]
Mar 14 00:25:04.903442 systemd[1]: sshd@10-10.128.0.67:22-80.94.95.115:52712.service: Deactivated successfully.
Mar 14 00:25:07.796554 sudo[1733]: pam_unix(sudo:session): session closed for user root
Mar 14 00:25:07.835058 sshd[1730]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:07.846816 systemd[1]: sshd@8-10.128.0.67:22-4.153.228.146:54514.service: Deactivated successfully.
Mar 14 00:25:07.847438 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:25:07.855361 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:25:07.857817 systemd[1]: session-9.scope: Consumed 5.029s CPU time, 159.0M memory peak, 0B memory swap peak.
Mar 14 00:25:07.862598 systemd-logind[1439]: Removed session 9.
Mar 14 00:25:11.890245 systemd[1]: Created slice kubepods-besteffort-podcaa2b884_c1a0_4c42_8775_19d2bb5ba1a3.slice - libcontainer container kubepods-besteffort-podcaa2b884_c1a0_4c42_8775_19d2bb5ba1a3.slice.
Mar 14 00:25:11.928318 kubelet[2594]: I0314 00:25:11.928251 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/caa2b884-c1a0-4c42-8775-19d2bb5ba1a3-typha-certs\") pod \"calico-typha-6dffcd5c46-nfxhl\" (UID: \"caa2b884-c1a0-4c42-8775-19d2bb5ba1a3\") " pod="calico-system/calico-typha-6dffcd5c46-nfxhl"
Mar 14 00:25:11.931154 kubelet[2594]: I0314 00:25:11.928340 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa2b884-c1a0-4c42-8775-19d2bb5ba1a3-tigera-ca-bundle\") pod \"calico-typha-6dffcd5c46-nfxhl\" (UID: \"caa2b884-c1a0-4c42-8775-19d2bb5ba1a3\") " pod="calico-system/calico-typha-6dffcd5c46-nfxhl"
Mar 14 00:25:11.931154 kubelet[2594]: I0314 00:25:11.928368 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdh5n\" (UniqueName: \"kubernetes.io/projected/caa2b884-c1a0-4c42-8775-19d2bb5ba1a3-kube-api-access-gdh5n\") pod \"calico-typha-6dffcd5c46-nfxhl\" (UID: \"caa2b884-c1a0-4c42-8775-19d2bb5ba1a3\") " pod="calico-system/calico-typha-6dffcd5c46-nfxhl"
Mar 14 00:25:12.006822 systemd[1]: Created slice kubepods-besteffort-pod1e8a054b_6b29_4810_ab1a_5ad8cedb899f.slice - libcontainer container kubepods-besteffort-pod1e8a054b_6b29_4810_ab1a_5ad8cedb899f.slice.
Mar 14 00:25:12.032133 kubelet[2594]: I0314 00:25:12.031887 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-cni-log-dir\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.032464 kubelet[2594]: I0314 00:25:12.032223 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-node-certs\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.032796 kubelet[2594]: I0314 00:25:12.032559 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5cp9\" (UniqueName: \"kubernetes.io/projected/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-kube-api-access-b5cp9\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.033077 kubelet[2594]: I0314 00:25:12.032636 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-bpffs\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.033077 kubelet[2594]: I0314 00:25:12.033021 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-cni-net-dir\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.033623 kubelet[2594]: I0314 00:25:12.033054 2594 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-var-run-calico\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.034099 kubelet[2594]: I0314 00:25:12.033948 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-cni-bin-dir\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.034099 kubelet[2594]: I0314 00:25:12.034035 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-xtables-lock\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.034727 kubelet[2594]: I0314 00:25:12.034544 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-sys-fs\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.034727 kubelet[2594]: I0314 00:25:12.034667 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-lib-modules\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.035372 kubelet[2594]: I0314 00:25:12.034826 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: 
\"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-nodeproc\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.035682 kubelet[2594]: I0314 00:25:12.035564 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-policysync\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.035682 kubelet[2594]: I0314 00:25:12.035633 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-tigera-ca-bundle\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.035984 kubelet[2594]: I0314 00:25:12.035761 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-var-lib-calico\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.036895 kubelet[2594]: I0314 00:25:12.036474 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e8a054b-6b29-4810-ab1a-5ad8cedb899f-flexvol-driver-host\") pod \"calico-node-7stzr\" (UID: \"1e8a054b-6b29-4810-ab1a-5ad8cedb899f\") " pod="calico-system/calico-node-7stzr" Mar 14 00:25:12.117214 kubelet[2594]: E0314 00:25:12.117145 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049" Mar 14 00:25:12.136999 kubelet[2594]: I0314 00:25:12.136949 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00362120-b53e-4112-9493-f945eb34a049-socket-dir\") pod \"csi-node-driver-6w7hq\" (UID: \"00362120-b53e-4112-9493-f945eb34a049\") " pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:12.137270 kubelet[2594]: I0314 00:25:12.137025 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/00362120-b53e-4112-9493-f945eb34a049-varrun\") pod \"csi-node-driver-6w7hq\" (UID: \"00362120-b53e-4112-9493-f945eb34a049\") " pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:12.137270 kubelet[2594]: I0314 00:25:12.137156 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00362120-b53e-4112-9493-f945eb34a049-kubelet-dir\") pod \"csi-node-driver-6w7hq\" (UID: \"00362120-b53e-4112-9493-f945eb34a049\") " pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:12.137270 kubelet[2594]: I0314 00:25:12.137186 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00362120-b53e-4112-9493-f945eb34a049-registration-dir\") pod \"csi-node-driver-6w7hq\" (UID: \"00362120-b53e-4112-9493-f945eb34a049\") " pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:12.137651 kubelet[2594]: I0314 00:25:12.137307 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrh5n\" (UniqueName: \"kubernetes.io/projected/00362120-b53e-4112-9493-f945eb34a049-kube-api-access-mrh5n\") pod 
\"csi-node-driver-6w7hq\" (UID: \"00362120-b53e-4112-9493-f945eb34a049\") " pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:12.145071 kubelet[2594]: E0314 00:25:12.144833 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.147777 kubelet[2594]: W0314 00:25:12.147418 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.147777 kubelet[2594]: E0314 00:25:12.147463 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.151752 kubelet[2594]: E0314 00:25:12.149175 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.151752 kubelet[2594]: W0314 00:25:12.149196 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.151752 kubelet[2594]: E0314 00:25:12.149220 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.157889 kubelet[2594]: E0314 00:25:12.157854 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.157889 kubelet[2594]: W0314 00:25:12.157886 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.158076 kubelet[2594]: E0314 00:25:12.157917 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.181108 kubelet[2594]: E0314 00:25:12.181068 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.181108 kubelet[2594]: W0314 00:25:12.181102 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.181327 kubelet[2594]: E0314 00:25:12.181132 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.207776 containerd[1462]: time="2026-03-14T00:25:12.207685972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dffcd5c46-nfxhl,Uid:caa2b884-c1a0-4c42-8775-19d2bb5ba1a3,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:12.239033 kubelet[2594]: E0314 00:25:12.238839 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.239033 kubelet[2594]: W0314 00:25:12.238872 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.239033 kubelet[2594]: E0314 00:25:12.238904 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.241886 kubelet[2594]: E0314 00:25:12.240204 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.241886 kubelet[2594]: W0314 00:25:12.240230 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.241886 kubelet[2594]: E0314 00:25:12.240257 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.242675 kubelet[2594]: E0314 00:25:12.242496 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.242675 kubelet[2594]: W0314 00:25:12.242521 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.242675 kubelet[2594]: E0314 00:25:12.242543 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.245084 kubelet[2594]: E0314 00:25:12.243600 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.245084 kubelet[2594]: W0314 00:25:12.243619 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.245084 kubelet[2594]: E0314 00:25:12.243639 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.246119 kubelet[2594]: E0314 00:25:12.245742 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.246119 kubelet[2594]: W0314 00:25:12.245762 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.246119 kubelet[2594]: E0314 00:25:12.245783 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.248006 kubelet[2594]: E0314 00:25:12.247816 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.248006 kubelet[2594]: W0314 00:25:12.247834 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.248006 kubelet[2594]: E0314 00:25:12.247852 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.248233 kubelet[2594]: E0314 00:25:12.248218 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.248291 kubelet[2594]: W0314 00:25:12.248233 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.248291 kubelet[2594]: E0314 00:25:12.248252 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.249113 kubelet[2594]: E0314 00:25:12.248611 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.249113 kubelet[2594]: W0314 00:25:12.248630 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.249113 kubelet[2594]: E0314 00:25:12.248652 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.251729 kubelet[2594]: E0314 00:25:12.249327 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.251729 kubelet[2594]: W0314 00:25:12.249348 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.251729 kubelet[2594]: E0314 00:25:12.249366 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.251729 kubelet[2594]: E0314 00:25:12.250048 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.251729 kubelet[2594]: W0314 00:25:12.250064 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.251729 kubelet[2594]: E0314 00:25:12.250081 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.252181 kubelet[2594]: E0314 00:25:12.251935 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.252181 kubelet[2594]: W0314 00:25:12.251953 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.252181 kubelet[2594]: E0314 00:25:12.251972 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.252387 kubelet[2594]: E0314 00:25:12.252365 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.252449 kubelet[2594]: W0314 00:25:12.252387 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.252449 kubelet[2594]: E0314 00:25:12.252414 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.255301 kubelet[2594]: E0314 00:25:12.254832 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.255301 kubelet[2594]: W0314 00:25:12.254851 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.255301 kubelet[2594]: E0314 00:25:12.254868 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.255301 kubelet[2594]: E0314 00:25:12.255252 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.255301 kubelet[2594]: W0314 00:25:12.255266 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.255301 kubelet[2594]: E0314 00:25:12.255284 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.258009 kubelet[2594]: E0314 00:25:12.255810 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.258009 kubelet[2594]: W0314 00:25:12.255829 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.258009 kubelet[2594]: E0314 00:25:12.255850 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.258009 kubelet[2594]: E0314 00:25:12.256284 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.258009 kubelet[2594]: W0314 00:25:12.256296 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.258009 kubelet[2594]: E0314 00:25:12.256313 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.258009 kubelet[2594]: E0314 00:25:12.257437 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.258009 kubelet[2594]: W0314 00:25:12.257452 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.258009 kubelet[2594]: E0314 00:25:12.257470 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.258440 kubelet[2594]: E0314 00:25:12.258400 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.258440 kubelet[2594]: W0314 00:25:12.258416 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.258440 kubelet[2594]: E0314 00:25:12.258432 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.259466 kubelet[2594]: E0314 00:25:12.259346 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.259466 kubelet[2594]: W0314 00:25:12.259432 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.259466 kubelet[2594]: E0314 00:25:12.259470 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.261723 kubelet[2594]: E0314 00:25:12.260589 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.261723 kubelet[2594]: W0314 00:25:12.260614 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.261723 kubelet[2594]: E0314 00:25:12.260633 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.261723 kubelet[2594]: E0314 00:25:12.261672 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.261723 kubelet[2594]: W0314 00:25:12.261689 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.261723 kubelet[2594]: E0314 00:25:12.261723 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.263683 kubelet[2594]: E0314 00:25:12.262397 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.263683 kubelet[2594]: W0314 00:25:12.262418 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.263683 kubelet[2594]: E0314 00:25:12.262435 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.263914 kubelet[2594]: E0314 00:25:12.263820 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.263914 kubelet[2594]: W0314 00:25:12.263836 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.263914 kubelet[2594]: E0314 00:25:12.263857 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.265407 kubelet[2594]: E0314 00:25:12.265371 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.265504 kubelet[2594]: W0314 00:25:12.265414 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.265504 kubelet[2594]: E0314 00:25:12.265433 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:25:12.267193 kubelet[2594]: E0314 00:25:12.267165 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.267193 kubelet[2594]: W0314 00:25:12.267191 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.267333 kubelet[2594]: E0314 00:25:12.267209 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.279195 containerd[1462]: time="2026-03-14T00:25:12.278115746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:12.279195 containerd[1462]: time="2026-03-14T00:25:12.278202705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:12.279195 containerd[1462]: time="2026-03-14T00:25:12.278222838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:12.279195 containerd[1462]: time="2026-03-14T00:25:12.278336538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:12.292804 kubelet[2594]: E0314 00:25:12.292633 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:25:12.292804 kubelet[2594]: W0314 00:25:12.292667 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:25:12.292804 kubelet[2594]: E0314 00:25:12.292719 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:25:12.315855 containerd[1462]: time="2026-03-14T00:25:12.315776587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7stzr,Uid:1e8a054b-6b29-4810-ab1a-5ad8cedb899f,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:12.316289 systemd[1]: Started cri-containerd-99451f029c2a3345769f8d60d7cf1f256b94707fb0a18331d7f312d3684f8821.scope - libcontainer container 99451f029c2a3345769f8d60d7cf1f256b94707fb0a18331d7f312d3684f8821. Mar 14 00:25:12.363358 containerd[1462]: time="2026-03-14T00:25:12.362991353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:12.363358 containerd[1462]: time="2026-03-14T00:25:12.363097123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:12.363358 containerd[1462]: time="2026-03-14T00:25:12.363126718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:12.363358 containerd[1462]: time="2026-03-14T00:25:12.363247077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:25:12.396986 systemd[1]: Started cri-containerd-8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1.scope - libcontainer container 8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1.
Mar 14 00:25:12.419520 containerd[1462]: time="2026-03-14T00:25:12.419442064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dffcd5c46-nfxhl,Uid:caa2b884-c1a0-4c42-8775-19d2bb5ba1a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"99451f029c2a3345769f8d60d7cf1f256b94707fb0a18331d7f312d3684f8821\""
Mar 14 00:25:12.424730 containerd[1462]: time="2026-03-14T00:25:12.424196230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 14 00:25:12.453751 containerd[1462]: time="2026-03-14T00:25:12.453658201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7stzr,Uid:1e8a054b-6b29-4810-ab1a-5ad8cedb899f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\""
Mar 14 00:25:13.584852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771816383.mount: Deactivated successfully.
Mar 14 00:25:14.049974 kubelet[2594]: E0314 00:25:14.049900 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:14.561399 containerd[1462]: time="2026-03-14T00:25:14.561337178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:14.562950 containerd[1462]: time="2026-03-14T00:25:14.562872094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 14 00:25:14.567725 containerd[1462]: time="2026-03-14T00:25:14.565984743Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:14.572634 containerd[1462]: time="2026-03-14T00:25:14.572582072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:14.574803 containerd[1462]: time="2026-03-14T00:25:14.574746525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.150478845s"
Mar 14 00:25:14.574803 containerd[1462]: time="2026-03-14T00:25:14.574795499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 14 00:25:14.577285 containerd[1462]: time="2026-03-14T00:25:14.576466547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 14 00:25:14.602609 containerd[1462]: time="2026-03-14T00:25:14.602535306Z" level=info msg="CreateContainer within sandbox \"99451f029c2a3345769f8d60d7cf1f256b94707fb0a18331d7f312d3684f8821\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 14 00:25:14.621654 containerd[1462]: time="2026-03-14T00:25:14.621593304Z" level=info msg="CreateContainer within sandbox \"99451f029c2a3345769f8d60d7cf1f256b94707fb0a18331d7f312d3684f8821\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed18216704d39834b155a634af57d5e9f6be27fe127b74b1e8cb0182f719dc1e\""
Mar 14 00:25:14.623327 containerd[1462]: time="2026-03-14T00:25:14.622343919Z" level=info msg="StartContainer for \"ed18216704d39834b155a634af57d5e9f6be27fe127b74b1e8cb0182f719dc1e\""
Mar 14 00:25:14.670979 systemd[1]: Started cri-containerd-ed18216704d39834b155a634af57d5e9f6be27fe127b74b1e8cb0182f719dc1e.scope - libcontainer container ed18216704d39834b155a634af57d5e9f6be27fe127b74b1e8cb0182f719dc1e.
Mar 14 00:25:14.735871 containerd[1462]: time="2026-03-14T00:25:14.735782669Z" level=info msg="StartContainer for \"ed18216704d39834b155a634af57d5e9f6be27fe127b74b1e8cb0182f719dc1e\" returns successfully"
Mar 14 00:25:15.204876 kubelet[2594]: I0314 00:25:15.204774 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6dffcd5c46-nfxhl" podStartSLOduration=2.052387938 podStartE2EDuration="4.204749377s" podCreationTimestamp="2026-03-14 00:25:11 +0000 UTC" firstStartedPulling="2026-03-14 00:25:12.423526873 +0000 UTC m=+21.589014119" lastFinishedPulling="2026-03-14 00:25:14.575888293 +0000 UTC m=+23.741375558" observedRunningTime="2026-03-14 00:25:15.19898319 +0000 UTC m=+24.364470726" watchObservedRunningTime="2026-03-14 00:25:15.204749377 +0000 UTC m=+24.370236650"
Mar 14 00:25:15.253734 kubelet[2594]: E0314 00:25:15.253677 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:25:15.253734 kubelet[2594]: W0314 00:25:15.253730 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:25:15.254011 kubelet[2594]: E0314 00:25:15.253763 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 14 00:25:15.679473 containerd[1462]: time="2026-03-14T00:25:15.679403864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:15.680984 containerd[1462]: time="2026-03-14T00:25:15.680909287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 14 00:25:15.682500 containerd[1462]: time="2026-03-14T00:25:15.682431948Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:15.685868 containerd[1462]: time="2026-03-14T00:25:15.685792436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:15.687099 containerd[1462]: time="2026-03-14T00:25:15.686902593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.110388853s"
Mar 14 00:25:15.687099 containerd[1462]: time="2026-03-14T00:25:15.686953314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 14 00:25:15.693199 containerd[1462]: time="2026-03-14T00:25:15.692951071Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 14 00:25:15.721495 containerd[1462]: time="2026-03-14T00:25:15.721221571Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e\""
Mar 14 00:25:15.722120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604953637.mount: Deactivated successfully.
Mar 14 00:25:15.725081 containerd[1462]: time="2026-03-14T00:25:15.724391337Z" level=info msg="StartContainer for \"71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e\""
Mar 14 00:25:15.780000 systemd[1]: Started cri-containerd-71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e.scope - libcontainer container 71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e.
Mar 14 00:25:15.838300 containerd[1462]: time="2026-03-14T00:25:15.838228461Z" level=info msg="StartContainer for \"71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e\" returns successfully"
Mar 14 00:25:15.874687 systemd[1]: cri-containerd-71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e.scope: Deactivated successfully.
Mar 14 00:25:15.931031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e-rootfs.mount: Deactivated successfully.
Mar 14 00:25:16.049555 kubelet[2594]: E0314 00:25:16.049471 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:16.182348 kubelet[2594]: I0314 00:25:16.181526 2594 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:25:16.822268 containerd[1462]: time="2026-03-14T00:25:16.822054550Z" level=info msg="shim disconnected" id=71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e namespace=k8s.io
Mar 14 00:25:16.822268 containerd[1462]: time="2026-03-14T00:25:16.822142565Z" level=warning msg="cleaning up after shim disconnected" id=71079a644cd321cdf512706430ba7535af5c56026f201784bea84cbd9dcf883e namespace=k8s.io
Mar 14 00:25:16.822268 containerd[1462]: time="2026-03-14T00:25:16.822158503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:17.189133 containerd[1462]: time="2026-03-14T00:25:17.187757583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 14 00:25:18.049192 kubelet[2594]: E0314 00:25:18.049111 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:20.051204 kubelet[2594]: E0314 00:25:20.049803 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:22.050220 kubelet[2594]: E0314 00:25:22.050125 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:24.050144 kubelet[2594]: E0314 00:25:24.050051 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:24.226555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140009745.mount: Deactivated successfully.
Mar 14 00:25:24.260045 containerd[1462]: time="2026-03-14T00:25:24.259967991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:24.261586 containerd[1462]: time="2026-03-14T00:25:24.261420824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 14 00:25:24.264725 containerd[1462]: time="2026-03-14T00:25:24.262741286Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:24.266503 containerd[1462]: time="2026-03-14T00:25:24.266449901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:24.267563 containerd[1462]: time="2026-03-14T00:25:24.267514302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.079688979s"
Mar 14 00:25:24.267685 containerd[1462]: time="2026-03-14T00:25:24.267569580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 14 00:25:24.273630 containerd[1462]: time="2026-03-14T00:25:24.273587274Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 14 00:25:24.301662 containerd[1462]: time="2026-03-14T00:25:24.301527024Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc\""
Mar 14 00:25:24.304394 containerd[1462]: time="2026-03-14T00:25:24.302655764Z" level=info msg="StartContainer for \"1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc\""
Mar 14 00:25:24.353996 systemd[1]: Started cri-containerd-1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc.scope - libcontainer container 1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc.
Mar 14 00:25:24.395830 containerd[1462]: time="2026-03-14T00:25:24.395607446Z" level=info msg="StartContainer for \"1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc\" returns successfully"
Mar 14 00:25:24.460992 systemd[1]: cri-containerd-1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc.scope: Deactivated successfully.
Mar 14 00:25:25.228501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc-rootfs.mount: Deactivated successfully.
Mar 14 00:25:26.049241 kubelet[2594]: E0314 00:25:26.049142 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:26.087993 containerd[1462]: time="2026-03-14T00:25:26.087637372Z" level=info msg="shim disconnected" id=1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc namespace=k8s.io
Mar 14 00:25:26.087993 containerd[1462]: time="2026-03-14T00:25:26.087730937Z" level=warning msg="cleaning up after shim disconnected" id=1c55bf8bbe4341c71356a3d58c3f3132a1ab0bbad9cdd8e5d7a5d77032aeddbc namespace=k8s.io
Mar 14 00:25:26.087993 containerd[1462]: time="2026-03-14T00:25:26.087748875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:26.224625 containerd[1462]: time="2026-03-14T00:25:26.224560412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 14 00:25:28.049859 kubelet[2594]: E0314 00:25:28.049350 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049"
Mar 14 00:25:29.478198 containerd[1462]: time="2026-03-14T00:25:29.478075439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:29.479550 containerd[1462]: time="2026-03-14T00:25:29.479493477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 14 00:25:29.480910 containerd[1462]: time="2026-03-14T00:25:29.480830056Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:29.485163 containerd[1462]: time="2026-03-14T00:25:29.484601758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:25:29.486757 containerd[1462]: time="2026-03-14T00:25:29.485815901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.26119831s"
Mar 14 00:25:29.486757 containerd[1462]: time="2026-03-14T00:25:29.485862492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 14 00:25:29.491425 containerd[1462]: time="2026-03-14T00:25:29.491362476Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 14 00:25:29.515045 containerd[1462]: time="2026-03-14T00:25:29.514988567Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e\""
Mar 14 00:25:29.519208 containerd[1462]: time="2026-03-14T00:25:29.516014342Z" level=info msg="StartContainer
for \"dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e\"" Mar 14 00:25:29.518076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649999506.mount: Deactivated successfully. Mar 14 00:25:29.571930 systemd[1]: Started cri-containerd-dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e.scope - libcontainer container dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e. Mar 14 00:25:29.615384 containerd[1462]: time="2026-03-14T00:25:29.615312060Z" level=info msg="StartContainer for \"dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e\" returns successfully" Mar 14 00:25:30.050771 kubelet[2594]: E0314 00:25:30.049892 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049" Mar 14 00:25:30.155014 kubelet[2594]: I0314 00:25:30.154958 2594 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:25:30.647918 containerd[1462]: time="2026-03-14T00:25:30.647683987Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:25:30.651097 systemd[1]: cri-containerd-dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e.scope: Deactivated successfully. Mar 14 00:25:30.687319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e-rootfs.mount: Deactivated successfully. 
Mar 14 00:25:30.719777 kubelet[2594]: I0314 00:25:30.718856 2594 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 14 00:25:31.003138 systemd[1]: Created slice kubepods-burstable-poda9e9eb5c_f4d2_40c6_a693_3d1291777ac5.slice - libcontainer container kubepods-burstable-poda9e9eb5c_f4d2_40c6_a693_3d1291777ac5.slice. Mar 14 00:25:31.040813 systemd[1]: Created slice kubepods-burstable-pod133fdf5a_fd04_49b7_9129_b4e1bf634740.slice - libcontainer container kubepods-burstable-pod133fdf5a_fd04_49b7_9129_b4e1bf634740.slice. Mar 14 00:25:31.096526 kubelet[2594]: I0314 00:25:31.095881 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngccd\" (UniqueName: \"kubernetes.io/projected/a9e9eb5c-f4d2-40c6-a693-3d1291777ac5-kube-api-access-ngccd\") pod \"coredns-7d764666f9-8swjn\" (UID: \"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5\") " pod="kube-system/coredns-7d764666f9-8swjn" Mar 14 00:25:31.096526 kubelet[2594]: I0314 00:25:31.095948 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9e9eb5c-f4d2-40c6-a693-3d1291777ac5-config-volume\") pod \"coredns-7d764666f9-8swjn\" (UID: \"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5\") " pod="kube-system/coredns-7d764666f9-8swjn" Mar 14 00:25:31.096526 kubelet[2594]: I0314 00:25:31.095989 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/133fdf5a-fd04-49b7-9129-b4e1bf634740-config-volume\") pod \"coredns-7d764666f9-q9d7h\" (UID: \"133fdf5a-fd04-49b7-9129-b4e1bf634740\") " pod="kube-system/coredns-7d764666f9-q9d7h" Mar 14 00:25:31.096526 kubelet[2594]: I0314 00:25:31.096022 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wsrv\" (UniqueName: 
\"kubernetes.io/projected/133fdf5a-fd04-49b7-9129-b4e1bf634740-kube-api-access-7wsrv\") pod \"coredns-7d764666f9-q9d7h\" (UID: \"133fdf5a-fd04-49b7-9129-b4e1bf634740\") " pod="kube-system/coredns-7d764666f9-q9d7h" Mar 14 00:25:31.178659 systemd[1]: Created slice kubepods-besteffort-poda5634a34_65be_48f0_b8ff_53a2b82693c3.slice - libcontainer container kubepods-besteffort-poda5634a34_65be_48f0_b8ff_53a2b82693c3.slice. Mar 14 00:25:31.190268 systemd[1]: Created slice kubepods-besteffort-pod00362120_b53e_4112_9493_f945eb34a049.slice - libcontainer container kubepods-besteffort-pod00362120_b53e_4112_9493_f945eb34a049.slice. Mar 14 00:25:31.196666 kubelet[2594]: I0314 00:25:31.196619 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwl5\" (UniqueName: \"kubernetes.io/projected/a5634a34-65be-48f0-b8ff-53a2b82693c3-kube-api-access-vjwl5\") pod \"whisker-ff48b58b9-qcnlj\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " pod="calico-system/whisker-ff48b58b9-qcnlj" Mar 14 00:25:31.196883 kubelet[2594]: I0314 00:25:31.196784 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-backend-key-pair\") pod \"whisker-ff48b58b9-qcnlj\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " pod="calico-system/whisker-ff48b58b9-qcnlj" Mar 14 00:25:31.196883 kubelet[2594]: I0314 00:25:31.196830 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-nginx-config\") pod \"whisker-ff48b58b9-qcnlj\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " pod="calico-system/whisker-ff48b58b9-qcnlj" Mar 14 00:25:31.196883 kubelet[2594]: I0314 00:25:31.196864 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-ca-bundle\") pod \"whisker-ff48b58b9-qcnlj\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " pod="calico-system/whisker-ff48b58b9-qcnlj" Mar 14 00:25:31.243368 containerd[1462]: time="2026-03-14T00:25:31.243284862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w7hq,Uid:00362120-b53e-4112-9493-f945eb34a049,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:31.249971 containerd[1462]: time="2026-03-14T00:25:31.249861937Z" level=info msg="shim disconnected" id=dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e namespace=k8s.io Mar 14 00:25:31.251198 containerd[1462]: time="2026-03-14T00:25:31.250907637Z" level=warning msg="cleaning up after shim disconnected" id=dd6f8994b42ef728d0cb1f7502be5d6972612c3968a6bec2c5a52b90ba84213e namespace=k8s.io Mar 14 00:25:31.251198 containerd[1462]: time="2026-03-14T00:25:31.250937017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:31.264051 systemd[1]: Created slice kubepods-besteffort-podc1a7d672_fe4a_4281_9f30_d5ed4679c445.slice - libcontainer container kubepods-besteffort-podc1a7d672_fe4a_4281_9f30_d5ed4679c445.slice. 
Mar 14 00:25:31.300152 kubelet[2594]: I0314 00:25:31.300102 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e9476b50-628d-4ade-b7da-1ee31561d583-calico-apiserver-certs\") pod \"calico-apiserver-5dbb9f9f6b-fmc6z\" (UID: \"e9476b50-628d-4ade-b7da-1ee31561d583\") " pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" Mar 14 00:25:31.300587 kubelet[2594]: I0314 00:25:31.300159 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sfkf\" (UniqueName: \"kubernetes.io/projected/e9476b50-628d-4ade-b7da-1ee31561d583-kube-api-access-4sfkf\") pod \"calico-apiserver-5dbb9f9f6b-fmc6z\" (UID: \"e9476b50-628d-4ade-b7da-1ee31561d583\") " pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" Mar 14 00:25:31.300587 kubelet[2594]: I0314 00:25:31.300203 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a7711bee-34a0-429d-9db5-924b7445ab4d-goldmane-key-pair\") pod \"goldmane-9f7667bb8-6qx4h\" (UID: \"a7711bee-34a0-429d-9db5-924b7445ab4d\") " pod="calico-system/goldmane-9f7667bb8-6qx4h" Mar 14 00:25:31.300587 kubelet[2594]: I0314 00:25:31.300251 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7711bee-34a0-429d-9db5-924b7445ab4d-config\") pod \"goldmane-9f7667bb8-6qx4h\" (UID: \"a7711bee-34a0-429d-9db5-924b7445ab4d\") " pod="calico-system/goldmane-9f7667bb8-6qx4h" Mar 14 00:25:31.300587 kubelet[2594]: I0314 00:25:31.300301 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1a7d672-fe4a-4281-9f30-d5ed4679c445-calico-apiserver-certs\") pod \"calico-apiserver-5dbb9f9f6b-snvzj\" (UID: 
\"c1a7d672-fe4a-4281-9f30-d5ed4679c445\") " pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" Mar 14 00:25:31.300587 kubelet[2594]: I0314 00:25:31.300334 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2sz\" (UniqueName: \"kubernetes.io/projected/a7711bee-34a0-429d-9db5-924b7445ab4d-kube-api-access-vt2sz\") pod \"goldmane-9f7667bb8-6qx4h\" (UID: \"a7711bee-34a0-429d-9db5-924b7445ab4d\") " pod="calico-system/goldmane-9f7667bb8-6qx4h" Mar 14 00:25:31.300929 kubelet[2594]: I0314 00:25:31.300414 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7711bee-34a0-429d-9db5-924b7445ab4d-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-6qx4h\" (UID: \"a7711bee-34a0-429d-9db5-924b7445ab4d\") " pod="calico-system/goldmane-9f7667bb8-6qx4h" Mar 14 00:25:31.302540 kubelet[2594]: I0314 00:25:31.301076 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rts4w\" (UniqueName: \"kubernetes.io/projected/c1a7d672-fe4a-4281-9f30-d5ed4679c445-kube-api-access-rts4w\") pod \"calico-apiserver-5dbb9f9f6b-snvzj\" (UID: \"c1a7d672-fe4a-4281-9f30-d5ed4679c445\") " pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" Mar 14 00:25:31.323598 systemd[1]: Created slice kubepods-besteffort-pode9476b50_628d_4ade_b7da_1ee31561d583.slice - libcontainer container kubepods-besteffort-pode9476b50_628d_4ade_b7da_1ee31561d583.slice. 
Mar 14 00:25:31.348143 containerd[1462]: time="2026-03-14T00:25:31.347341742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8swjn,Uid:a9e9eb5c-f4d2-40c6-a693-3d1291777ac5,Namespace:kube-system,Attempt:0,}" Mar 14 00:25:31.353568 containerd[1462]: time="2026-03-14T00:25:31.353032376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-q9d7h,Uid:133fdf5a-fd04-49b7-9129-b4e1bf634740,Namespace:kube-system,Attempt:0,}" Mar 14 00:25:31.368684 systemd[1]: Created slice kubepods-besteffort-poda7711bee_34a0_429d_9db5_924b7445ab4d.slice - libcontainer container kubepods-besteffort-poda7711bee_34a0_429d_9db5_924b7445ab4d.slice. Mar 14 00:25:31.397964 systemd[1]: Created slice kubepods-besteffort-podb65e270e_651d_4115_b14d_9b8e312de715.slice - libcontainer container kubepods-besteffort-podb65e270e_651d_4115_b14d_9b8e312de715.slice. Mar 14 00:25:31.402751 kubelet[2594]: I0314 00:25:31.402453 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4cr\" (UniqueName: \"kubernetes.io/projected/b65e270e-651d-4115-b14d-9b8e312de715-kube-api-access-ww4cr\") pod \"calico-kube-controllers-559ddfd44c-rtx2t\" (UID: \"b65e270e-651d-4115-b14d-9b8e312de715\") " pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" Mar 14 00:25:31.402751 kubelet[2594]: I0314 00:25:31.402535 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b65e270e-651d-4115-b14d-9b8e312de715-tigera-ca-bundle\") pod \"calico-kube-controllers-559ddfd44c-rtx2t\" (UID: \"b65e270e-651d-4115-b14d-9b8e312de715\") " pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" Mar 14 00:25:31.490715 containerd[1462]: time="2026-03-14T00:25:31.490392987Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-ff48b58b9-qcnlj,Uid:a5634a34-65be-48f0-b8ff-53a2b82693c3,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:31.606945 containerd[1462]: time="2026-03-14T00:25:31.606775473Z" level=error msg="Failed to destroy network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.608594 containerd[1462]: time="2026-03-14T00:25:31.608349884Z" level=error msg="encountered an error cleaning up failed sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.608594 containerd[1462]: time="2026-03-14T00:25:31.608540126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w7hq,Uid:00362120-b53e-4112-9493-f945eb34a049,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.609730 kubelet[2594]: E0314 00:25:31.609392 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.609730 kubelet[2594]: E0314 
00:25:31.609477 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:31.609730 kubelet[2594]: E0314 00:25:31.609508 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6w7hq" Mar 14 00:25:31.609964 kubelet[2594]: E0314 00:25:31.609577 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6w7hq_calico-system(00362120-b53e-4112-9493-f945eb34a049)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6w7hq_calico-system(00362120-b53e-4112-9493-f945eb34a049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049" Mar 14 00:25:31.615847 containerd[1462]: time="2026-03-14T00:25:31.615792198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-snvzj,Uid:c1a7d672-fe4a-4281-9f30-d5ed4679c445,Namespace:calico-system,Attempt:0,}" Mar 14 
00:25:31.649104 containerd[1462]: time="2026-03-14T00:25:31.648936076Z" level=error msg="Failed to destroy network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.650434 containerd[1462]: time="2026-03-14T00:25:31.650036767Z" level=error msg="encountered an error cleaning up failed sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.650434 containerd[1462]: time="2026-03-14T00:25:31.650122730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8swjn,Uid:a9e9eb5c-f4d2-40c6-a693-3d1291777ac5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.651478 kubelet[2594]: E0314 00:25:31.650627 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.651478 kubelet[2594]: E0314 00:25:31.650790 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8swjn" Mar 14 00:25:31.651478 kubelet[2594]: E0314 00:25:31.650828 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8swjn" Mar 14 00:25:31.652014 kubelet[2594]: E0314 00:25:31.650975 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-8swjn_kube-system(a9e9eb5c-f4d2-40c6-a693-3d1291777ac5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-8swjn_kube-system(a9e9eb5c-f4d2-40c6-a693-3d1291777ac5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-8swjn" podUID="a9e9eb5c-f4d2-40c6-a693-3d1291777ac5" Mar 14 00:25:31.663154 containerd[1462]: time="2026-03-14T00:25:31.660959862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-fmc6z,Uid:e9476b50-628d-4ade-b7da-1ee31561d583,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:31.676607 containerd[1462]: time="2026-03-14T00:25:31.675749338Z" level=error msg="Failed to destroy network for sandbox 
\"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.677215 containerd[1462]: time="2026-03-14T00:25:31.676878399Z" level=error msg="encountered an error cleaning up failed sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.677355 containerd[1462]: time="2026-03-14T00:25:31.677247606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-q9d7h,Uid:133fdf5a-fd04-49b7-9129-b4e1bf634740,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.678323 kubelet[2594]: E0314 00:25:31.678269 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.678565 kubelet[2594]: E0314 00:25:31.678513 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-q9d7h" Mar 14 00:25:31.678784 kubelet[2594]: E0314 00:25:31.678749 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-q9d7h" Mar 14 00:25:31.679086 kubelet[2594]: E0314 00:25:31.679013 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-q9d7h_kube-system(133fdf5a-fd04-49b7-9129-b4e1bf634740)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-q9d7h_kube-system(133fdf5a-fd04-49b7-9129-b4e1bf634740)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-q9d7h" podUID="133fdf5a-fd04-49b7-9129-b4e1bf634740" Mar 14 00:25:31.704096 containerd[1462]: time="2026-03-14T00:25:31.704028440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qx4h,Uid:a7711bee-34a0-429d-9db5-924b7445ab4d,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:31.724197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2-shm.mount: Deactivated successfully. 
Mar 14 00:25:31.732108 containerd[1462]: time="2026-03-14T00:25:31.730107726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559ddfd44c-rtx2t,Uid:b65e270e-651d-4115-b14d-9b8e312de715,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:31.837386 containerd[1462]: time="2026-03-14T00:25:31.837074844Z" level=error msg="Failed to destroy network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.840858 containerd[1462]: time="2026-03-14T00:25:31.839780353Z" level=error msg="encountered an error cleaning up failed sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.840858 containerd[1462]: time="2026-03-14T00:25:31.839886949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff48b58b9-qcnlj,Uid:a5634a34-65be-48f0-b8ff-53a2b82693c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.841067 kubelet[2594]: E0314 00:25:31.840751 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.841067 kubelet[2594]: E0314 00:25:31.840843 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ff48b58b9-qcnlj" Mar 14 00:25:31.841067 kubelet[2594]: E0314 00:25:31.840879 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ff48b58b9-qcnlj" Mar 14 00:25:31.841233 kubelet[2594]: E0314 00:25:31.840970 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-ff48b58b9-qcnlj_calico-system(a5634a34-65be-48f0-b8ff-53a2b82693c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-ff48b58b9-qcnlj_calico-system(a5634a34-65be-48f0-b8ff-53a2b82693c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ff48b58b9-qcnlj" podUID="a5634a34-65be-48f0-b8ff-53a2b82693c3" Mar 14 00:25:31.963555 containerd[1462]: time="2026-03-14T00:25:31.963269102Z" level=error msg="Failed to destroy network for 
sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.966254 containerd[1462]: time="2026-03-14T00:25:31.965986903Z" level=error msg="encountered an error cleaning up failed sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.966254 containerd[1462]: time="2026-03-14T00:25:31.966110528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-snvzj,Uid:c1a7d672-fe4a-4281-9f30-d5ed4679c445,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.966816 kubelet[2594]: E0314 00:25:31.966422 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.966816 kubelet[2594]: E0314 00:25:31.966506 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" Mar 14 00:25:31.966816 kubelet[2594]: E0314 00:25:31.966536 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" Mar 14 00:25:31.967339 kubelet[2594]: E0314 00:25:31.966611 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dbb9f9f6b-snvzj_calico-system(c1a7d672-fe4a-4281-9f30-d5ed4679c445)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dbb9f9f6b-snvzj_calico-system(c1a7d672-fe4a-4281-9f30-d5ed4679c445)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" podUID="c1a7d672-fe4a-4281-9f30-d5ed4679c445" Mar 14 00:25:31.988103 containerd[1462]: time="2026-03-14T00:25:31.988022535Z" level=error msg="Failed to destroy network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.988762 containerd[1462]: 
time="2026-03-14T00:25:31.988585665Z" level=error msg="encountered an error cleaning up failed sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.988762 containerd[1462]: time="2026-03-14T00:25:31.988717912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-fmc6z,Uid:e9476b50-628d-4ade-b7da-1ee31561d583,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.989138 kubelet[2594]: E0314 00:25:31.989077 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:31.989385 kubelet[2594]: E0314 00:25:31.989149 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" Mar 14 00:25:31.989385 kubelet[2594]: E0314 00:25:31.989186 2594 kuberuntime_manager.go:1558] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" Mar 14 00:25:31.989385 kubelet[2594]: E0314 00:25:31.989267 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dbb9f9f6b-fmc6z_calico-system(e9476b50-628d-4ade-b7da-1ee31561d583)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dbb9f9f6b-fmc6z_calico-system(e9476b50-628d-4ade-b7da-1ee31561d583)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" podUID="e9476b50-628d-4ade-b7da-1ee31561d583" Mar 14 00:25:32.010228 containerd[1462]: time="2026-03-14T00:25:32.009972964Z" level=error msg="Failed to destroy network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.010603 containerd[1462]: time="2026-03-14T00:25:32.010474270Z" level=error msg="Failed to destroy network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 14 00:25:32.011324 containerd[1462]: time="2026-03-14T00:25:32.010508150Z" level=error msg="encountered an error cleaning up failed sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.011324 containerd[1462]: time="2026-03-14T00:25:32.010768647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559ddfd44c-rtx2t,Uid:b65e270e-651d-4115-b14d-9b8e312de715,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.011324 containerd[1462]: time="2026-03-14T00:25:32.010980943Z" level=error msg="encountered an error cleaning up failed sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.011324 containerd[1462]: time="2026-03-14T00:25:32.011065888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qx4h,Uid:a7711bee-34a0-429d-9db5-924b7445ab4d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 14 00:25:32.011819 kubelet[2594]: E0314 00:25:32.011176 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.011819 kubelet[2594]: E0314 00:25:32.011278 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" Mar 14 00:25:32.011819 kubelet[2594]: E0314 00:25:32.011309 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" Mar 14 00:25:32.012305 kubelet[2594]: E0314 00:25:32.012197 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-559ddfd44c-rtx2t_calico-system(b65e270e-651d-4115-b14d-9b8e312de715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-559ddfd44c-rtx2t_calico-system(b65e270e-651d-4115-b14d-9b8e312de715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" podUID="b65e270e-651d-4115-b14d-9b8e312de715" Mar 14 00:25:32.012883 kubelet[2594]: E0314 00:25:32.012814 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.013221 kubelet[2594]: E0314 00:25:32.012990 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6qx4h" Mar 14 00:25:32.013221 kubelet[2594]: E0314 00:25:32.013138 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6qx4h" Mar 14 00:25:32.013510 kubelet[2594]: E0314 00:25:32.013411 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-9f7667bb8-6qx4h_calico-system(a7711bee-34a0-429d-9db5-924b7445ab4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-6qx4h_calico-system(a7711bee-34a0-429d-9db5-924b7445ab4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-6qx4h" podUID="a7711bee-34a0-429d-9db5-924b7445ab4d" Mar 14 00:25:32.259393 kubelet[2594]: I0314 00:25:32.259134 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:32.265257 containerd[1462]: time="2026-03-14T00:25:32.261081483Z" level=info msg="StopPodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\"" Mar 14 00:25:32.265257 containerd[1462]: time="2026-03-14T00:25:32.261339800Z" level=info msg="Ensure that sandbox 02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027 in task-service has been cleanup successfully" Mar 14 00:25:32.265504 kubelet[2594]: I0314 00:25:32.264607 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:32.266637 containerd[1462]: time="2026-03-14T00:25:32.266599149Z" level=info msg="StopPodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\"" Mar 14 00:25:32.267531 containerd[1462]: time="2026-03-14T00:25:32.267374942Z" level=info msg="Ensure that sandbox 22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca in task-service has been cleanup successfully" Mar 14 00:25:32.270728 kubelet[2594]: I0314 00:25:32.270434 2594 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:32.274838 containerd[1462]: time="2026-03-14T00:25:32.274495402Z" level=info msg="StopPodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\"" Mar 14 00:25:32.276993 containerd[1462]: time="2026-03-14T00:25:32.276931550Z" level=info msg="Ensure that sandbox bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6 in task-service has been cleanup successfully" Mar 14 00:25:32.283816 kubelet[2594]: I0314 00:25:32.282550 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:32.287893 containerd[1462]: time="2026-03-14T00:25:32.287848023Z" level=info msg="StopPodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\"" Mar 14 00:25:32.288457 containerd[1462]: time="2026-03-14T00:25:32.288419412Z" level=info msg="Ensure that sandbox ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b in task-service has been cleanup successfully" Mar 14 00:25:32.300117 kubelet[2594]: I0314 00:25:32.300068 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:32.303722 containerd[1462]: time="2026-03-14T00:25:32.303423819Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:25:32.306622 containerd[1462]: time="2026-03-14T00:25:32.306156622Z" level=info msg="StopPodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\"" Mar 14 00:25:32.313086 containerd[1462]: time="2026-03-14T00:25:32.312568109Z" level=info msg="Ensure that sandbox 310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2 in 
task-service has been cleanup successfully" Mar 14 00:25:32.325463 kubelet[2594]: I0314 00:25:32.325116 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:32.327919 containerd[1462]: time="2026-03-14T00:25:32.327869515Z" level=info msg="StopPodSandbox for \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\"" Mar 14 00:25:32.330729 containerd[1462]: time="2026-03-14T00:25:32.328110618Z" level=info msg="Ensure that sandbox ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a in task-service has been cleanup successfully" Mar 14 00:25:32.333324 kubelet[2594]: I0314 00:25:32.333285 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:32.337811 containerd[1462]: time="2026-03-14T00:25:32.337763429Z" level=info msg="StopPodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\"" Mar 14 00:25:32.338113 containerd[1462]: time="2026-03-14T00:25:32.338080281Z" level=info msg="Ensure that sandbox 1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca in task-service has been cleanup successfully" Mar 14 00:25:32.347070 kubelet[2594]: I0314 00:25:32.347033 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:32.349037 containerd[1462]: time="2026-03-14T00:25:32.348874039Z" level=info msg="StopPodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\"" Mar 14 00:25:32.350019 containerd[1462]: time="2026-03-14T00:25:32.349610392Z" level=info msg="Ensure that sandbox 66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4 in task-service has been cleanup successfully" Mar 14 00:25:32.429075 containerd[1462]: 
time="2026-03-14T00:25:32.429007804Z" level=info msg="CreateContainer within sandbox \"8f172c78f8aca692fdacd42a5eb6d5422a7bcd433f8efbf1e91fb19f508750d1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1ad18bb898095fe1c5c1f5c3daa0b3c153d67836d8f696cbfd11f9244f4d0016\"" Mar 14 00:25:32.432888 containerd[1462]: time="2026-03-14T00:25:32.432811336Z" level=info msg="StartContainer for \"1ad18bb898095fe1c5c1f5c3daa0b3c153d67836d8f696cbfd11f9244f4d0016\"" Mar 14 00:25:32.513063 containerd[1462]: time="2026-03-14T00:25:32.510974545Z" level=error msg="StopPodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" failed" error="failed to destroy network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.513216 kubelet[2594]: E0314 00:25:32.511729 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:32.513216 kubelet[2594]: E0314 00:25:32.511798 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca"} Mar 14 00:25:32.513216 kubelet[2594]: E0314 00:25:32.511878 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5634a34-65be-48f0-b8ff-53a2b82693c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.513216 kubelet[2594]: E0314 00:25:32.511922 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5634a34-65be-48f0-b8ff-53a2b82693c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ff48b58b9-qcnlj" podUID="a5634a34-65be-48f0-b8ff-53a2b82693c3" Mar 14 00:25:32.547209 containerd[1462]: time="2026-03-14T00:25:32.547096080Z" level=error msg="StopPodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" failed" error="failed to destroy network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.547622 kubelet[2594]: E0314 00:25:32.547564 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:32.547940 kubelet[2594]: E0314 
00:25:32.547771 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027"} Mar 14 00:25:32.548049 kubelet[2594]: E0314 00:25:32.548001 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7711bee-34a0-429d-9db5-924b7445ab4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.548827 kubelet[2594]: E0314 00:25:32.548050 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7711bee-34a0-429d-9db5-924b7445ab4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-6qx4h" podUID="a7711bee-34a0-429d-9db5-924b7445ab4d" Mar 14 00:25:32.566847 containerd[1462]: time="2026-03-14T00:25:32.566031397Z" level=error msg="StopPodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" failed" error="failed to destroy network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.567769 kubelet[2594]: E0314 00:25:32.566471 2594 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:32.567769 kubelet[2594]: E0314 00:25:32.566533 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b"} Mar 14 00:25:32.567769 kubelet[2594]: E0314 00:25:32.566580 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.567769 kubelet[2594]: E0314 00:25:32.566633 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-8swjn" podUID="a9e9eb5c-f4d2-40c6-a693-3d1291777ac5" Mar 14 00:25:32.585425 containerd[1462]: time="2026-03-14T00:25:32.585365526Z" level=error msg="StopPodSandbox for 
\"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" failed" error="failed to destroy network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.586185 kubelet[2594]: E0314 00:25:32.586116 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:32.586432 kubelet[2594]: E0314 00:25:32.586393 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a"} Mar 14 00:25:32.586767 kubelet[2594]: E0314 00:25:32.586622 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9476b50-628d-4ade-b7da-1ee31561d583\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.587056 kubelet[2594]: E0314 00:25:32.586992 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9476b50-628d-4ade-b7da-1ee31561d583\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" podUID="e9476b50-628d-4ade-b7da-1ee31561d583" Mar 14 00:25:32.591762 containerd[1462]: time="2026-03-14T00:25:32.590839039Z" level=error msg="StopPodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" failed" error="failed to destroy network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.591762 containerd[1462]: time="2026-03-14T00:25:32.591443107Z" level=error msg="StopPodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" failed" error="failed to destroy network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.591936 kubelet[2594]: E0314 00:25:32.591142 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:32.591936 kubelet[2594]: E0314 00:25:32.591194 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca"} Mar 14 00:25:32.591936 kubelet[2594]: E0314 00:25:32.591235 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1a7d672-fe4a-4281-9f30-d5ed4679c445\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.591936 kubelet[2594]: E0314 00:25:32.591296 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1a7d672-fe4a-4281-9f30-d5ed4679c445\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" podUID="c1a7d672-fe4a-4281-9f30-d5ed4679c445" Mar 14 00:25:32.592578 kubelet[2594]: E0314 00:25:32.592400 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:32.592578 kubelet[2594]: E0314 00:25:32.592449 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6"} Mar 14 00:25:32.592578 kubelet[2594]: E0314 00:25:32.592491 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"133fdf5a-fd04-49b7-9129-b4e1bf634740\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.592578 kubelet[2594]: E0314 00:25:32.592527 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"133fdf5a-fd04-49b7-9129-b4e1bf634740\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-q9d7h" podUID="133fdf5a-fd04-49b7-9129-b4e1bf634740" Mar 14 00:25:32.594945 systemd[1]: Started cri-containerd-1ad18bb898095fe1c5c1f5c3daa0b3c153d67836d8f696cbfd11f9244f4d0016.scope - libcontainer container 1ad18bb898095fe1c5c1f5c3daa0b3c153d67836d8f696cbfd11f9244f4d0016. 
Mar 14 00:25:32.613923 containerd[1462]: time="2026-03-14T00:25:32.613560017Z" level=error msg="StopPodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" failed" error="failed to destroy network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.614815 containerd[1462]: time="2026-03-14T00:25:32.614528747Z" level=error msg="StopPodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" failed" error="failed to destroy network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:32.615129 kubelet[2594]: E0314 00:25:32.614623 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:32.615129 kubelet[2594]: E0314 00:25:32.614747 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4"} Mar 14 00:25:32.615637 kubelet[2594]: E0314 00:25:32.615300 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b65e270e-651d-4115-b14d-9b8e312de715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.615637 kubelet[2594]: E0314 00:25:32.615484 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b65e270e-651d-4115-b14d-9b8e312de715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" podUID="b65e270e-651d-4115-b14d-9b8e312de715" Mar 14 00:25:32.616195 kubelet[2594]: E0314 00:25:32.615776 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:32.616498 kubelet[2594]: E0314 00:25:32.616051 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2"} Mar 14 00:25:32.616498 kubelet[2594]: E0314 00:25:32.616383 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00362120-b53e-4112-9493-f945eb34a049\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:32.616498 kubelet[2594]: E0314 00:25:32.616423 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00362120-b53e-4112-9493-f945eb34a049\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6w7hq" podUID="00362120-b53e-4112-9493-f945eb34a049" Mar 14 00:25:32.650243 containerd[1462]: time="2026-03-14T00:25:32.650176484Z" level=info msg="StartContainer for \"1ad18bb898095fe1c5c1f5c3daa0b3c153d67836d8f696cbfd11f9244f4d0016\" returns successfully" Mar 14 00:25:32.692893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027-shm.mount: Deactivated successfully. Mar 14 00:25:32.693042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4-shm.mount: Deactivated successfully. Mar 14 00:25:32.693150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a-shm.mount: Deactivated successfully. Mar 14 00:25:32.693250 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca-shm.mount: Deactivated successfully. 
Mar 14 00:25:32.693346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca-shm.mount: Deactivated successfully. Mar 14 00:25:33.354551 containerd[1462]: time="2026-03-14T00:25:33.354486053Z" level=info msg="StopPodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\"" Mar 14 00:25:33.406442 kubelet[2594]: I0314 00:25:33.406267 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-7stzr" podStartSLOduration=2.605640532 podStartE2EDuration="22.406210051s" podCreationTimestamp="2026-03-14 00:25:11 +0000 UTC" firstStartedPulling="2026-03-14 00:25:12.45646777 +0000 UTC m=+21.621955021" lastFinishedPulling="2026-03-14 00:25:32.257037274 +0000 UTC m=+41.422524540" observedRunningTime="2026-03-14 00:25:33.40538481 +0000 UTC m=+42.570872085" watchObservedRunningTime="2026-03-14 00:25:33.406210051 +0000 UTC m=+42.571697324" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.442 [INFO][3847] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.442 [INFO][3847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" iface="eth0" netns="/var/run/netns/cni-51eff527-0a94-4f7c-9c04-3fad6a1fb361" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.443 [INFO][3847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" iface="eth0" netns="/var/run/netns/cni-51eff527-0a94-4f7c-9c04-3fad6a1fb361" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.443 [INFO][3847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" iface="eth0" netns="/var/run/netns/cni-51eff527-0a94-4f7c-9c04-3fad6a1fb361" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.443 [INFO][3847] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.443 [INFO][3847] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.472 [INFO][3854] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.472 [INFO][3854] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.472 [INFO][3854] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.482 [WARNING][3854] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.483 [INFO][3854] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.485 [INFO][3854] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:33.490297 containerd[1462]: 2026-03-14 00:25:33.488 [INFO][3847] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:33.492579 containerd[1462]: time="2026-03-14T00:25:33.491435422Z" level=info msg="TearDown network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" successfully" Mar 14 00:25:33.492579 containerd[1462]: time="2026-03-14T00:25:33.491482411Z" level=info msg="StopPodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" returns successfully" Mar 14 00:25:33.496389 systemd[1]: run-netns-cni\x2d51eff527\x2d0a94\x2d4f7c\x2d9c04\x2d3fad6a1fb361.mount: Deactivated successfully. 
Mar 14 00:25:33.518729 kubelet[2594]: I0314 00:25:33.518531 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-nginx-config\") pod \"a5634a34-65be-48f0-b8ff-53a2b82693c3\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " Mar 14 00:25:33.518729 kubelet[2594]: I0314 00:25:33.518618 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/a5634a34-65be-48f0-b8ff-53a2b82693c3-kube-api-access-vjwl5\" (UniqueName: \"kubernetes.io/projected/a5634a34-65be-48f0-b8ff-53a2b82693c3-kube-api-access-vjwl5\") pod \"a5634a34-65be-48f0-b8ff-53a2b82693c3\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " Mar 14 00:25:33.518729 kubelet[2594]: I0314 00:25:33.518653 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-backend-key-pair\") pod \"a5634a34-65be-48f0-b8ff-53a2b82693c3\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " Mar 14 00:25:33.518729 kubelet[2594]: I0314 00:25:33.518691 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-ca-bundle\") pod \"a5634a34-65be-48f0-b8ff-53a2b82693c3\" (UID: \"a5634a34-65be-48f0-b8ff-53a2b82693c3\") " Mar 14 00:25:33.519881 kubelet[2594]: I0314 00:25:33.519550 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-nginx-config" pod "a5634a34-65be-48f0-b8ff-53a2b82693c3" (UID: "a5634a34-65be-48f0-b8ff-53a2b82693c3"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:25:33.520108 kubelet[2594]: I0314 00:25:33.520033 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-ca-bundle" pod "a5634a34-65be-48f0-b8ff-53a2b82693c3" (UID: "a5634a34-65be-48f0-b8ff-53a2b82693c3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:25:33.524618 kubelet[2594]: I0314 00:25:33.524536 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-backend-key-pair" pod "a5634a34-65be-48f0-b8ff-53a2b82693c3" (UID: "a5634a34-65be-48f0-b8ff-53a2b82693c3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:25:33.526887 kubelet[2594]: I0314 00:25:33.526829 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5634a34-65be-48f0-b8ff-53a2b82693c3-kube-api-access-vjwl5" pod "a5634a34-65be-48f0-b8ff-53a2b82693c3" (UID: "a5634a34-65be-48f0-b8ff-53a2b82693c3"). InnerVolumeSpecName "kube-api-access-vjwl5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:25:33.528309 systemd[1]: var-lib-kubelet-pods-a5634a34\x2d65be\x2d48f0\x2db8ff\x2d53a2b82693c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvjwl5.mount: Deactivated successfully. Mar 14 00:25:33.528648 systemd[1]: var-lib-kubelet-pods-a5634a34\x2d65be\x2d48f0\x2db8ff\x2d53a2b82693c3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 14 00:25:33.619317 kubelet[2594]: I0314 00:25:33.619139 2594 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vjwl5\" (UniqueName: \"kubernetes.io/projected/a5634a34-65be-48f0-b8ff-53a2b82693c3-kube-api-access-vjwl5\") on node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" DevicePath \"\"" Mar 14 00:25:33.619317 kubelet[2594]: I0314 00:25:33.619194 2594 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" DevicePath \"\"" Mar 14 00:25:33.619317 kubelet[2594]: I0314 00:25:33.619213 2594 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" DevicePath \"\"" Mar 14 00:25:33.619317 kubelet[2594]: I0314 00:25:33.619230 2594 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5634a34-65be-48f0-b8ff-53a2b82693c3-nginx-config\") on node \"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" DevicePath \"\"" Mar 14 00:25:34.374306 systemd[1]: Removed slice kubepods-besteffort-poda5634a34_65be_48f0_b8ff_53a2b82693c3.slice - libcontainer container kubepods-besteffort-poda5634a34_65be_48f0_b8ff_53a2b82693c3.slice. Mar 14 00:25:34.477349 systemd[1]: Created slice kubepods-besteffort-podcc173819_151b_44db_8c92_0c85b31e0b5d.slice - libcontainer container kubepods-besteffort-podcc173819_151b_44db_8c92_0c85b31e0b5d.slice. 
Mar 14 00:25:34.482874 kubelet[2594]: E0314 00:25:34.482545 2594 status_manager.go:1045] "Failed to get status for pod" err="pods \"whisker-799f4d5486-jhj2l\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' and this object" podUID="cc173819-151b-44db-8c92-0c85b31e0b5d" pod="calico-system/whisker-799f4d5486-jhj2l" Mar 14 00:25:34.624613 kubelet[2594]: I0314 00:25:34.624514 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc173819-151b-44db-8c92-0c85b31e0b5d-nginx-config\") pod \"whisker-799f4d5486-jhj2l\" (UID: \"cc173819-151b-44db-8c92-0c85b31e0b5d\") " pod="calico-system/whisker-799f4d5486-jhj2l" Mar 14 00:25:34.624865 kubelet[2594]: I0314 00:25:34.624649 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4wrt\" (UniqueName: \"kubernetes.io/projected/cc173819-151b-44db-8c92-0c85b31e0b5d-kube-api-access-t4wrt\") pod \"whisker-799f4d5486-jhj2l\" (UID: \"cc173819-151b-44db-8c92-0c85b31e0b5d\") " pod="calico-system/whisker-799f4d5486-jhj2l" Mar 14 00:25:34.624865 kubelet[2594]: I0314 00:25:34.624731 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc173819-151b-44db-8c92-0c85b31e0b5d-whisker-ca-bundle\") pod \"whisker-799f4d5486-jhj2l\" (UID: \"cc173819-151b-44db-8c92-0c85b31e0b5d\") " pod="calico-system/whisker-799f4d5486-jhj2l" Mar 14 00:25:34.624865 kubelet[2594]: I0314 00:25:34.624772 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/cc173819-151b-44db-8c92-0c85b31e0b5d-whisker-backend-key-pair\") pod \"whisker-799f4d5486-jhj2l\" (UID: \"cc173819-151b-44db-8c92-0c85b31e0b5d\") " pod="calico-system/whisker-799f4d5486-jhj2l" Mar 14 00:25:34.786485 containerd[1462]: time="2026-03-14T00:25:34.786422479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799f4d5486-jhj2l,Uid:cc173819-151b-44db-8c92-0c85b31e0b5d,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:35.000740 kernel: calico-node[3938]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:25:35.059389 kubelet[2594]: I0314 00:25:35.056334 2594 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a5634a34-65be-48f0-b8ff-53a2b82693c3" path="/var/lib/kubelet/pods/a5634a34-65be-48f0-b8ff-53a2b82693c3/volumes" Mar 14 00:25:35.066072 systemd-networkd[1368]: cali9d63ed9cf20: Link UP Mar 14 00:25:35.069058 systemd-networkd[1368]: cali9d63ed9cf20: Gained carrier Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.879 [ERROR][3963] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.905 [INFO][3963] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0 whisker-799f4d5486- calico-system cc173819-151b-44db-8c92-0c85b31e0b5d 907 0 2026-03-14 00:25:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:799f4d5486 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da whisker-799f4d5486-jhj2l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9d63ed9cf20 
[] [] }} ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.906 [INFO][3963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.967 [INFO][3993] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" HandleID="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.983 [INFO][3993] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" HandleID="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"whisker-799f4d5486-jhj2l", "timestamp":"2026-03-14 00:25:34.96797603 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.983 [INFO][3993] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.983 [INFO][3993] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.983 [INFO][3993] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.988 [INFO][3993] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.993 [INFO][3993] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:34.999 [INFO][3993] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.002 [INFO][3993] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.006 [INFO][3993] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.006 [INFO][3993] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" 
host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.008 [INFO][3993] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.018 [INFO][3993] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.028 [INFO][3993] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.193/26] block=192.168.114.192/26 handle="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.029 [INFO][3993] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.193/26] handle="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.029 [INFO][3993] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:25:35.094833 containerd[1462]: 2026-03-14 00:25:35.029 [INFO][3993] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.193/26] IPv6=[] ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" HandleID="k8s-pod-network.43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.097419 containerd[1462]: 2026-03-14 00:25:35.034 [INFO][3963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0", GenerateName:"whisker-799f4d5486-", Namespace:"calico-system", SelfLink:"", UID:"cc173819-151b-44db-8c92-0c85b31e0b5d", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"799f4d5486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"whisker-799f4d5486-jhj2l", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9d63ed9cf20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:35.097419 containerd[1462]: 2026-03-14 00:25:35.034 [INFO][3963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.193/32] ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.097419 containerd[1462]: 2026-03-14 00:25:35.035 [INFO][3963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d63ed9cf20 ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.097419 containerd[1462]: 2026-03-14 00:25:35.066 [INFO][3963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.097419 containerd[1462]: 2026-03-14 00:25:35.066 [INFO][3963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0", GenerateName:"whisker-799f4d5486-", Namespace:"calico-system", SelfLink:"", UID:"cc173819-151b-44db-8c92-0c85b31e0b5d", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"799f4d5486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b", Pod:"whisker-799f4d5486-jhj2l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9d63ed9cf20", MAC:"e2:a0:ab:96:03:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:35.097419 containerd[1462]: 2026-03-14 00:25:35.088 [INFO][3963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b" Namespace="calico-system" Pod="whisker-799f4d5486-jhj2l" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--799f4d5486--jhj2l-eth0" Mar 14 00:25:35.140191 containerd[1462]: 
time="2026-03-14T00:25:35.139995388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:35.140191 containerd[1462]: time="2026-03-14T00:25:35.140096657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:35.140191 containerd[1462]: time="2026-03-14T00:25:35.140153126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:35.141333 containerd[1462]: time="2026-03-14T00:25:35.140306958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:35.179855 systemd[1]: Started cri-containerd-43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b.scope - libcontainer container 43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b. 
Mar 14 00:25:35.273735 containerd[1462]: time="2026-03-14T00:25:35.272404735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799f4d5486-jhj2l,Uid:cc173819-151b-44db-8c92-0c85b31e0b5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b\"" Mar 14 00:25:35.277031 containerd[1462]: time="2026-03-14T00:25:35.276985878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:25:35.762047 systemd-networkd[1368]: vxlan.calico: Link UP Mar 14 00:25:35.762070 systemd-networkd[1368]: vxlan.calico: Gained carrier Mar 14 00:25:36.207402 systemd-networkd[1368]: cali9d63ed9cf20: Gained IPv6LL Mar 14 00:25:36.447624 containerd[1462]: time="2026-03-14T00:25:36.447551429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:36.449051 containerd[1462]: time="2026-03-14T00:25:36.448975405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:25:36.450862 containerd[1462]: time="2026-03-14T00:25:36.450786798Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:36.454399 containerd[1462]: time="2026-03-14T00:25:36.454327081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:36.455513 containerd[1462]: time="2026-03-14T00:25:36.455465823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.178430292s" Mar 14 00:25:36.455628 containerd[1462]: time="2026-03-14T00:25:36.455519239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:25:36.461509 containerd[1462]: time="2026-03-14T00:25:36.461278873Z" level=info msg="CreateContainer within sandbox \"43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:25:36.481992 containerd[1462]: time="2026-03-14T00:25:36.481930689Z" level=info msg="CreateContainer within sandbox \"43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c7d55284caf9d6574692b2f0be7e51de41efdb15f09c168c2bb92690abfa0afd\"" Mar 14 00:25:36.484752 containerd[1462]: time="2026-03-14T00:25:36.484665495Z" level=info msg="StartContainer for \"c7d55284caf9d6574692b2f0be7e51de41efdb15f09c168c2bb92690abfa0afd\"" Mar 14 00:25:36.537966 systemd[1]: Started cri-containerd-c7d55284caf9d6574692b2f0be7e51de41efdb15f09c168c2bb92690abfa0afd.scope - libcontainer container c7d55284caf9d6574692b2f0be7e51de41efdb15f09c168c2bb92690abfa0afd. Mar 14 00:25:36.597467 containerd[1462]: time="2026-03-14T00:25:36.596812743Z" level=info msg="StartContainer for \"c7d55284caf9d6574692b2f0be7e51de41efdb15f09c168c2bb92690abfa0afd\" returns successfully" Mar 14 00:25:36.599863 containerd[1462]: time="2026-03-14T00:25:36.599480940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:25:37.421866 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Mar 14 00:25:38.094853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544097308.mount: Deactivated successfully. 
Mar 14 00:25:38.117992 containerd[1462]: time="2026-03-14T00:25:38.117918945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:38.119449 containerd[1462]: time="2026-03-14T00:25:38.119209010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:25:38.120848 containerd[1462]: time="2026-03-14T00:25:38.120764015Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:38.125103 containerd[1462]: time="2026-03-14T00:25:38.125030030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:38.126425 containerd[1462]: time="2026-03-14T00:25:38.126353843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.526820485s" Mar 14 00:25:38.126425 containerd[1462]: time="2026-03-14T00:25:38.126409349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:25:38.133227 containerd[1462]: time="2026-03-14T00:25:38.133049139Z" level=info msg="CreateContainer within sandbox \"43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:25:38.163114 
containerd[1462]: time="2026-03-14T00:25:38.163047300Z" level=info msg="CreateContainer within sandbox \"43b4e0946209721441300bf051e5c3b488fb868fa56d10df348b8cc66d9add4b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2ab997329fdf4d294b513a022196e9d083ad06c71dd4c6c7b29f9daa68e2e17b\"" Mar 14 00:25:38.164154 containerd[1462]: time="2026-03-14T00:25:38.163873397Z" level=info msg="StartContainer for \"2ab997329fdf4d294b513a022196e9d083ad06c71dd4c6c7b29f9daa68e2e17b\"" Mar 14 00:25:38.217010 systemd[1]: Started cri-containerd-2ab997329fdf4d294b513a022196e9d083ad06c71dd4c6c7b29f9daa68e2e17b.scope - libcontainer container 2ab997329fdf4d294b513a022196e9d083ad06c71dd4c6c7b29f9daa68e2e17b. Mar 14 00:25:38.279881 containerd[1462]: time="2026-03-14T00:25:38.278654019Z" level=info msg="StartContainer for \"2ab997329fdf4d294b513a022196e9d083ad06c71dd4c6c7b29f9daa68e2e17b\" returns successfully" Mar 14 00:25:38.393195 kubelet[2594]: I0314 00:25:38.391904 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-799f4d5486-jhj2l" podStartSLOduration=1.539937674 podStartE2EDuration="4.391881607s" podCreationTimestamp="2026-03-14 00:25:34 +0000 UTC" firstStartedPulling="2026-03-14 00:25:35.276305866 +0000 UTC m=+44.441793112" lastFinishedPulling="2026-03-14 00:25:38.128249779 +0000 UTC m=+47.293737045" observedRunningTime="2026-03-14 00:25:38.390115035 +0000 UTC m=+47.555602310" watchObservedRunningTime="2026-03-14 00:25:38.391881607 +0000 UTC m=+47.557368880" Mar 14 00:25:40.189283 ntpd[1425]: Listen normally on 8 vxlan.calico 192.168.114.192:123 Mar 14 00:25:40.189426 ntpd[1425]: Listen normally on 9 cali9d63ed9cf20 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:25:40.189886 ntpd[1425]: 14 Mar 00:25:40 ntpd[1425]: Listen normally on 8 vxlan.calico 192.168.114.192:123 Mar 14 00:25:40.189886 ntpd[1425]: 14 Mar 00:25:40 ntpd[1425]: Listen normally on 9 cali9d63ed9cf20 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 
00:25:40.189886 ntpd[1425]: 14 Mar 00:25:40 ntpd[1425]: Listen normally on 10 vxlan.calico [fe80::64fb:94ff:fed0:c71d%7]:123 Mar 14 00:25:40.189511 ntpd[1425]: Listen normally on 10 vxlan.calico [fe80::64fb:94ff:fed0:c71d%7]:123 Mar 14 00:25:43.052954 containerd[1462]: time="2026-03-14T00:25:43.052649803Z" level=info msg="StopPodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\"" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.115 [INFO][4262] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.115 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" iface="eth0" netns="/var/run/netns/cni-ff2aab22-1687-ed13-4029-ead963d2b027" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.116 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" iface="eth0" netns="/var/run/netns/cni-ff2aab22-1687-ed13-4029-ead963d2b027" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.118 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" iface="eth0" netns="/var/run/netns/cni-ff2aab22-1687-ed13-4029-ead963d2b027" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.118 [INFO][4262] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.118 [INFO][4262] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.148 [INFO][4269] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.148 [INFO][4269] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.148 [INFO][4269] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.159 [WARNING][4269] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.159 [INFO][4269] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.162 [INFO][4269] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:43.165820 containerd[1462]: 2026-03-14 00:25:43.164 [INFO][4262] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:43.167391 containerd[1462]: time="2026-03-14T00:25:43.166772073Z" level=info msg="TearDown network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" successfully" Mar 14 00:25:43.167391 containerd[1462]: time="2026-03-14T00:25:43.166817647Z" level=info msg="StopPodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" returns successfully" Mar 14 00:25:43.172416 systemd[1]: run-netns-cni\x2dff2aab22\x2d1687\x2ded13\x2d4029\x2dead963d2b027.mount: Deactivated successfully. 
Mar 14 00:25:43.173204 containerd[1462]: time="2026-03-14T00:25:43.172554861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qx4h,Uid:a7711bee-34a0-429d-9db5-924b7445ab4d,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:43.335593 systemd-networkd[1368]: calif14e1101612: Link UP Mar 14 00:25:43.339042 systemd-networkd[1368]: calif14e1101612: Gained carrier Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.249 [INFO][4276] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0 goldmane-9f7667bb8- calico-system a7711bee-34a0-429d-9db5-924b7445ab4d 951 0 2026-03-14 00:25:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da goldmane-9f7667bb8-6qx4h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif14e1101612 [] [] }} ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.249 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.282 [INFO][4287] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" HandleID="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.293 [INFO][4287] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" HandleID="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"goldmane-9f7667bb8-6qx4h", "timestamp":"2026-03-14 00:25:43.282161527 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b71e0)} Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.293 [INFO][4287] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.294 [INFO][4287] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.294 [INFO][4287] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.297 [INFO][4287] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.302 [INFO][4287] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.307 [INFO][4287] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.310 [INFO][4287] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.314 [INFO][4287] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.314 [INFO][4287] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.316 [INFO][4287] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747 Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.320 [INFO][4287] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 
handle="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.329 [INFO][4287] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.194/26] block=192.168.114.192/26 handle="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.329 [INFO][4287] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.194/26] handle="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.329 [INFO][4287] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:43.366156 containerd[1462]: 2026-03-14 00:25:43.329 [INFO][4287] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.194/26] IPv6=[] ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" HandleID="k8s-pod-network.f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.367644 containerd[1462]: 2026-03-14 00:25:43.332 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0", 
GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a7711bee-34a0-429d-9db5-924b7445ab4d", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"goldmane-9f7667bb8-6qx4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif14e1101612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:43.367644 containerd[1462]: 2026-03-14 00:25:43.332 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.194/32] ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.367644 containerd[1462]: 2026-03-14 00:25:43.332 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif14e1101612 ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.367644 containerd[1462]: 2026-03-14 00:25:43.336 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.367644 containerd[1462]: 2026-03-14 00:25:43.336 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a7711bee-34a0-429d-9db5-924b7445ab4d", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", 
ContainerID:"f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747", Pod:"goldmane-9f7667bb8-6qx4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif14e1101612", MAC:"fe:68:28:b3:43:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:43.367644 containerd[1462]: 2026-03-14 00:25:43.353 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747" Namespace="calico-system" Pod="goldmane-9f7667bb8-6qx4h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:43.415216 containerd[1462]: time="2026-03-14T00:25:43.415077835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:43.415416 containerd[1462]: time="2026-03-14T00:25:43.415253977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:43.415416 containerd[1462]: time="2026-03-14T00:25:43.415306567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:43.415552 containerd[1462]: time="2026-03-14T00:25:43.415489597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:43.473370 systemd[1]: Started cri-containerd-f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747.scope - libcontainer container f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747. 
Mar 14 00:25:43.552785 containerd[1462]: time="2026-03-14T00:25:43.552659874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6qx4h,Uid:a7711bee-34a0-429d-9db5-924b7445ab4d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747\"" Mar 14 00:25:43.556103 containerd[1462]: time="2026-03-14T00:25:43.556042873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:25:44.052055 containerd[1462]: time="2026-03-14T00:25:44.051990109Z" level=info msg="StopPodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\"" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.120 [INFO][4369] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.122 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" iface="eth0" netns="/var/run/netns/cni-e5482c16-b5da-08f1-aff0-203225e6f7b6" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.123 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" iface="eth0" netns="/var/run/netns/cni-e5482c16-b5da-08f1-aff0-203225e6f7b6" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.123 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" iface="eth0" netns="/var/run/netns/cni-e5482c16-b5da-08f1-aff0-203225e6f7b6" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.123 [INFO][4369] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.123 [INFO][4369] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.152 [INFO][4377] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.152 [INFO][4377] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.152 [INFO][4377] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.163 [WARNING][4377] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.163 [INFO][4377] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.165 [INFO][4377] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:44.168692 containerd[1462]: 2026-03-14 00:25:44.166 [INFO][4369] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:44.170980 containerd[1462]: time="2026-03-14T00:25:44.170801390Z" level=info msg="TearDown network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" successfully" Mar 14 00:25:44.170980 containerd[1462]: time="2026-03-14T00:25:44.170843963Z" level=info msg="StopPodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" returns successfully" Mar 14 00:25:44.175957 containerd[1462]: time="2026-03-14T00:25:44.175914051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559ddfd44c-rtx2t,Uid:b65e270e-651d-4115-b14d-9b8e312de715,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:44.176538 systemd[1]: run-netns-cni\x2de5482c16\x2db5da\x2d08f1\x2daff0\x2d203225e6f7b6.mount: Deactivated successfully. 
Mar 14 00:25:44.341374 systemd-networkd[1368]: cali921ae8d4eb7: Link UP Mar 14 00:25:44.344403 systemd-networkd[1368]: cali921ae8d4eb7: Gained carrier Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.248 [INFO][4384] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0 calico-kube-controllers-559ddfd44c- calico-system b65e270e-651d-4115-b14d-9b8e312de715 960 0 2026-03-14 00:25:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:559ddfd44c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da calico-kube-controllers-559ddfd44c-rtx2t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali921ae8d4eb7 [] [] }} ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.249 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.285 [INFO][4395] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" 
HandleID="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.297 [INFO][4395] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" HandleID="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"calico-kube-controllers-559ddfd44c-rtx2t", "timestamp":"2026-03-14 00:25:44.285813223 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004f02c0)} Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.297 [INFO][4395] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.297 [INFO][4395] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.297 [INFO][4395] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.300 [INFO][4395] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.307 [INFO][4395] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.313 [INFO][4395] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.315 [INFO][4395] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.318 [INFO][4395] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.318 [INFO][4395] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.320 [INFO][4395] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0 Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.325 [INFO][4395] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 
handle="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.334 [INFO][4395] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.195/26] block=192.168.114.192/26 handle="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.334 [INFO][4395] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.195/26] handle="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.334 [INFO][4395] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:44.366599 containerd[1462]: 2026-03-14 00:25:44.334 [INFO][4395] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.195/26] IPv6=[] ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" HandleID="k8s-pod-network.2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.368884 containerd[1462]: 2026-03-14 00:25:44.336 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0", GenerateName:"calico-kube-controllers-559ddfd44c-", Namespace:"calico-system", SelfLink:"", UID:"b65e270e-651d-4115-b14d-9b8e312de715", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559ddfd44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"calico-kube-controllers-559ddfd44c-rtx2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali921ae8d4eb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:44.368884 containerd[1462]: 2026-03-14 00:25:44.337 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.195/32] ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.368884 containerd[1462]: 2026-03-14 00:25:44.337 
[INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali921ae8d4eb7 ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.368884 containerd[1462]: 2026-03-14 00:25:44.345 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.368884 containerd[1462]: 2026-03-14 00:25:44.346 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0", GenerateName:"calico-kube-controllers-559ddfd44c-", Namespace:"calico-system", SelfLink:"", UID:"b65e270e-651d-4115-b14d-9b8e312de715", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559ddfd44c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0", Pod:"calico-kube-controllers-559ddfd44c-rtx2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali921ae8d4eb7", MAC:"6e:f2:dc:01:39:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:44.368884 containerd[1462]: 2026-03-14 00:25:44.363 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0" Namespace="calico-system" Pod="calico-kube-controllers-559ddfd44c-rtx2t" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:44.419982 containerd[1462]: time="2026-03-14T00:25:44.419825146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:44.419982 containerd[1462]: time="2026-03-14T00:25:44.419923177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:44.420290 containerd[1462]: time="2026-03-14T00:25:44.419959751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:44.420290 containerd[1462]: time="2026-03-14T00:25:44.420118643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:44.472255 systemd[1]: Started cri-containerd-2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0.scope - libcontainer container 2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0. Mar 14 00:25:44.590214 containerd[1462]: time="2026-03-14T00:25:44.590105555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559ddfd44c-rtx2t,Uid:b65e270e-651d-4115-b14d-9b8e312de715,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0\"" Mar 14 00:25:44.909255 systemd-networkd[1368]: calif14e1101612: Gained IPv6LL Mar 14 00:25:45.053088 containerd[1462]: time="2026-03-14T00:25:45.053036250Z" level=info msg="StopPodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\"" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.179 [INFO][4478] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.179 [INFO][4478] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" iface="eth0" netns="/var/run/netns/cni-a3ea0717-aafc-2acf-4d24-e3addcc4be35" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.179 [INFO][4478] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" iface="eth0" netns="/var/run/netns/cni-a3ea0717-aafc-2acf-4d24-e3addcc4be35" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.181 [INFO][4478] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" iface="eth0" netns="/var/run/netns/cni-a3ea0717-aafc-2acf-4d24-e3addcc4be35" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.181 [INFO][4478] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.181 [INFO][4478] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.229 [INFO][4485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.230 [INFO][4485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.230 [INFO][4485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.245 [WARNING][4485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.245 [INFO][4485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.248 [INFO][4485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:45.255353 containerd[1462]: 2026-03-14 00:25:45.251 [INFO][4478] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:45.258950 containerd[1462]: time="2026-03-14T00:25:45.258799623Z" level=info msg="TearDown network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" successfully" Mar 14 00:25:45.258950 containerd[1462]: time="2026-03-14T00:25:45.258847940Z" level=info msg="StopPodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" returns successfully" Mar 14 00:25:45.261450 systemd[1]: run-netns-cni\x2da3ea0717\x2daafc\x2d2acf\x2d4d24\x2de3addcc4be35.mount: Deactivated successfully. 
Mar 14 00:25:45.269264 containerd[1462]: time="2026-03-14T00:25:45.268657719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-snvzj,Uid:c1a7d672-fe4a-4281-9f30-d5ed4679c445,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:45.528920 systemd-networkd[1368]: calibeae7dec6ce: Link UP Mar 14 00:25:45.530759 systemd-networkd[1368]: calibeae7dec6ce: Gained carrier Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.380 [INFO][4492] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0 calico-apiserver-5dbb9f9f6b- calico-system c1a7d672-fe4a-4281-9f30-d5ed4679c445 966 0 2026-03-14 00:25:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dbb9f9f6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da calico-apiserver-5dbb9f9f6b-snvzj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibeae7dec6ce [] [] }} ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.380 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 
00:25:45.443 [INFO][4505] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" HandleID="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.455 [INFO][4505] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" HandleID="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"calico-apiserver-5dbb9f9f6b-snvzj", "timestamp":"2026-03-14 00:25:45.443612107 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004218c0)} Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.455 [INFO][4505] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.455 [INFO][4505] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.455 [INFO][4505] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.459 [INFO][4505] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.466 [INFO][4505] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.474 [INFO][4505] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.478 [INFO][4505] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.485 [INFO][4505] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.485 [INFO][4505] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.488 [INFO][4505] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.501 [INFO][4505] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 
handle="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.516 [INFO][4505] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.196/26] block=192.168.114.192/26 handle="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.517 [INFO][4505] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.196/26] handle="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.519 [INFO][4505] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:45.560669 containerd[1462]: 2026-03-14 00:25:45.519 [INFO][4505] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.196/26] IPv6=[] ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" HandleID="k8s-pod-network.f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.563868 containerd[1462]: 2026-03-14 00:25:45.523 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"c1a7d672-fe4a-4281-9f30-d5ed4679c445", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"calico-apiserver-5dbb9f9f6b-snvzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeae7dec6ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:45.563868 containerd[1462]: 2026-03-14 00:25:45.523 [INFO][4492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.196/32] ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.563868 containerd[1462]: 2026-03-14 00:25:45.523 [INFO][4492] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to calibeae7dec6ce ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.563868 containerd[1462]: 2026-03-14 00:25:45.527 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.563868 containerd[1462]: 2026-03-14 00:25:45.530 [INFO][4492] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"c1a7d672-fe4a-4281-9f30-d5ed4679c445", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd", Pod:"calico-apiserver-5dbb9f9f6b-snvzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeae7dec6ce", MAC:"52:a8:c3:4c:6b:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:45.563868 containerd[1462]: 2026-03-14 00:25:45.551 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-snvzj" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:45.621904 containerd[1462]: time="2026-03-14T00:25:45.619934033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:45.621904 containerd[1462]: time="2026-03-14T00:25:45.620011797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:45.621904 containerd[1462]: time="2026-03-14T00:25:45.620040876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:45.621904 containerd[1462]: time="2026-03-14T00:25:45.620162259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:45.678772 systemd-networkd[1368]: cali921ae8d4eb7: Gained IPv6LL Mar 14 00:25:45.695000 systemd[1]: Started cri-containerd-f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd.scope - libcontainer container f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd. Mar 14 00:25:45.916365 containerd[1462]: time="2026-03-14T00:25:45.916287098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-snvzj,Uid:c1a7d672-fe4a-4281-9f30-d5ed4679c445,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd\"" Mar 14 00:25:46.051969 containerd[1462]: time="2026-03-14T00:25:46.051905218Z" level=info msg="StopPodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\"" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.157 [INFO][4582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.158 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" iface="eth0" netns="/var/run/netns/cni-b8e65d0a-b8e3-eb70-4d61-787686565521" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.159 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" iface="eth0" netns="/var/run/netns/cni-b8e65d0a-b8e3-eb70-4d61-787686565521" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.160 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" iface="eth0" netns="/var/run/netns/cni-b8e65d0a-b8e3-eb70-4d61-787686565521" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.160 [INFO][4582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.162 [INFO][4582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.208 [INFO][4590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.209 [INFO][4590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.209 [INFO][4590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.235 [WARNING][4590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.235 [INFO][4590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.238 [INFO][4590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:46.251483 containerd[1462]: 2026-03-14 00:25:46.245 [INFO][4582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:46.253133 containerd[1462]: time="2026-03-14T00:25:46.251324365Z" level=info msg="TearDown network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" successfully" Mar 14 00:25:46.253133 containerd[1462]: time="2026-03-14T00:25:46.252960047Z" level=info msg="StopPodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" returns successfully" Mar 14 00:25:46.261993 systemd[1]: run-containerd-runc-k8s.io-f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd-runc.T8OhOT.mount: Deactivated successfully. Mar 14 00:25:46.263494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069602263.mount: Deactivated successfully. Mar 14 00:25:46.263636 systemd[1]: run-netns-cni\x2db8e65d0a\x2db8e3\x2deb70\x2d4d61\x2d787686565521.mount: Deactivated successfully. 
Mar 14 00:25:46.264651 containerd[1462]: time="2026-03-14T00:25:46.264570349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-q9d7h,Uid:133fdf5a-fd04-49b7-9129-b4e1bf634740,Namespace:kube-system,Attempt:1,}" Mar 14 00:25:46.550805 systemd-networkd[1368]: calic6663fc0cb7: Link UP Mar 14 00:25:46.553201 systemd-networkd[1368]: calic6663fc0cb7: Gained carrier Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.404 [INFO][4611] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0 coredns-7d764666f9- kube-system 133fdf5a-fd04-49b7-9129-b4e1bf634740 975 0 2026-03-14 00:24:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da coredns-7d764666f9-q9d7h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6663fc0cb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.404 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.466 [INFO][4624] ipam/ipam_plugin.go 235: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" HandleID="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.484 [INFO][4624] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" HandleID="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379470), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"coredns-7d764666f9-q9d7h", "timestamp":"2026-03-14 00:25:46.466382028 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005422c0)} Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.484 [INFO][4624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.484 [INFO][4624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.484 [INFO][4624] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.487 [INFO][4624] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.495 [INFO][4624] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.503 [INFO][4624] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.507 [INFO][4624] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.511 [INFO][4624] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.511 [INFO][4624] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.514 [INFO][4624] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.521 [INFO][4624] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 
handle="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.536 [INFO][4624] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.197/26] block=192.168.114.192/26 handle="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.536 [INFO][4624] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.197/26] handle="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.536 [INFO][4624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:46.589790 containerd[1462]: 2026-03-14 00:25:46.536 [INFO][4624] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.197/26] IPv6=[] ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" HandleID="k8s-pod-network.68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.592036 containerd[1462]: 2026-03-14 00:25:46.540 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0", 
GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"133fdf5a-fd04-49b7-9129-b4e1bf634740", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"coredns-7d764666f9-q9d7h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6663fc0cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:46.592036 containerd[1462]: 2026-03-14 00:25:46.540 [INFO][4611] 
cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.197/32] ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.592036 containerd[1462]: 2026-03-14 00:25:46.540 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6663fc0cb7 ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.592036 containerd[1462]: 2026-03-14 00:25:46.554 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.592351 containerd[1462]: 2026-03-14 00:25:46.555 [INFO][4611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"133fdf5a-fd04-49b7-9129-b4e1bf634740", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf", Pod:"coredns-7d764666f9-q9d7h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6663fc0cb7", MAC:"d6:79:83:00:77:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:46.592351 containerd[1462]: 2026-03-14 00:25:46.582 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf" 
Namespace="kube-system" Pod="coredns-7d764666f9-q9d7h" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:46.659098 containerd[1462]: time="2026-03-14T00:25:46.658235140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:46.660479 containerd[1462]: time="2026-03-14T00:25:46.658797288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:46.660479 containerd[1462]: time="2026-03-14T00:25:46.658847680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:46.660479 containerd[1462]: time="2026-03-14T00:25:46.659024599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:46.739978 systemd[1]: Started cri-containerd-68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf.scope - libcontainer container 68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf. 
Mar 14 00:25:46.847556 containerd[1462]: time="2026-03-14T00:25:46.847404046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-q9d7h,Uid:133fdf5a-fd04-49b7-9129-b4e1bf634740,Namespace:kube-system,Attempt:1,} returns sandbox id \"68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf\"" Mar 14 00:25:46.862836 containerd[1462]: time="2026-03-14T00:25:46.862081026Z" level=info msg="CreateContainer within sandbox \"68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:25:46.891077 containerd[1462]: time="2026-03-14T00:25:46.890977253Z" level=info msg="CreateContainer within sandbox \"68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2491ea90f3c8f64dae9df3c1c07a893a47f5c04e8d8d23ba0a0855c783d2590f\"" Mar 14 00:25:46.893461 containerd[1462]: time="2026-03-14T00:25:46.893419488Z" level=info msg="StartContainer for \"2491ea90f3c8f64dae9df3c1c07a893a47f5c04e8d8d23ba0a0855c783d2590f\"" Mar 14 00:25:46.950072 systemd[1]: Started cri-containerd-2491ea90f3c8f64dae9df3c1c07a893a47f5c04e8d8d23ba0a0855c783d2590f.scope - libcontainer container 2491ea90f3c8f64dae9df3c1c07a893a47f5c04e8d8d23ba0a0855c783d2590f. 
Mar 14 00:25:47.016580 containerd[1462]: time="2026-03-14T00:25:47.016414166Z" level=info msg="StartContainer for \"2491ea90f3c8f64dae9df3c1c07a893a47f5c04e8d8d23ba0a0855c783d2590f\" returns successfully" Mar 14 00:25:47.052639 containerd[1462]: time="2026-03-14T00:25:47.052170959Z" level=info msg="StopPodSandbox for \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\"" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.197 [INFO][4732] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.198 [INFO][4732] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" iface="eth0" netns="/var/run/netns/cni-07d050b7-1ca4-1906-4f9a-aaf7148376fe" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.199 [INFO][4732] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" iface="eth0" netns="/var/run/netns/cni-07d050b7-1ca4-1906-4f9a-aaf7148376fe" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.199 [INFO][4732] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" iface="eth0" netns="/var/run/netns/cni-07d050b7-1ca4-1906-4f9a-aaf7148376fe" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.199 [INFO][4732] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.199 [INFO][4732] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.237 [INFO][4744] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.238 [INFO][4744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.238 [INFO][4744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.250 [WARNING][4744] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.250 [INFO][4744] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.255 [INFO][4744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:47.260749 containerd[1462]: 2026-03-14 00:25:47.257 [INFO][4732] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:47.260749 containerd[1462]: time="2026-03-14T00:25:47.260608879Z" level=info msg="TearDown network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" successfully" Mar 14 00:25:47.260749 containerd[1462]: time="2026-03-14T00:25:47.260647918Z" level=info msg="StopPodSandbox for \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" returns successfully" Mar 14 00:25:47.271688 containerd[1462]: time="2026-03-14T00:25:47.269265971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-fmc6z,Uid:e9476b50-628d-4ade-b7da-1ee31561d583,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:47.275881 systemd[1]: run-netns-cni\x2d07d050b7\x2d1ca4\x2d1906\x2d4f9a\x2daaf7148376fe.mount: Deactivated successfully. 
Mar 14 00:25:47.467594 kubelet[2594]: I0314 00:25:47.467399 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-q9d7h" podStartSLOduration=50.467374439 podStartE2EDuration="50.467374439s" podCreationTimestamp="2026-03-14 00:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:25:47.466365225 +0000 UTC m=+56.631852498" watchObservedRunningTime="2026-03-14 00:25:47.467374439 +0000 UTC m=+56.632861712" Mar 14 00:25:47.533831 systemd-networkd[1368]: calibeae7dec6ce: Gained IPv6LL Mar 14 00:25:47.593811 systemd-networkd[1368]: calie0ff5c1e173: Link UP Mar 14 00:25:47.594214 systemd-networkd[1368]: calie0ff5c1e173: Gained carrier Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.389 [INFO][4750] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0 calico-apiserver-5dbb9f9f6b- calico-system e9476b50-628d-4ade-b7da-1ee31561d583 983 0 2026-03-14 00:25:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dbb9f9f6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da calico-apiserver-5dbb9f9f6b-fmc6z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie0ff5c1e173 [] [] }} ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.390 
[INFO][4750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.452 [INFO][4762] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" HandleID="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.483 [INFO][4762] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" HandleID="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"calico-apiserver-5dbb9f9f6b-fmc6z", "timestamp":"2026-03-14 00:25:47.452914017 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188840)} Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.484 [INFO][4762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.484 [INFO][4762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.484 [INFO][4762] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.490 [INFO][4762] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.504 [INFO][4762] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.530 [INFO][4762] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.536 [INFO][4762] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.545 [INFO][4762] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.545 [INFO][4762] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.550 [INFO][4762] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b Mar 14 00:25:47.635170 
containerd[1462]: 2026-03-14 00:25:47.559 [INFO][4762] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.572 [INFO][4762] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.198/26] block=192.168.114.192/26 handle="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.573 [INFO][4762] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.198/26] handle="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.574 [INFO][4762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:25:47.635170 containerd[1462]: 2026-03-14 00:25:47.574 [INFO][4762] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.198/26] IPv6=[] ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" HandleID="k8s-pod-network.ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.636373 containerd[1462]: 2026-03-14 00:25:47.582 [INFO][4750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"e9476b50-628d-4ade-b7da-1ee31561d583", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", 
Pod:"calico-apiserver-5dbb9f9f6b-fmc6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie0ff5c1e173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:47.636373 containerd[1462]: 2026-03-14 00:25:47.583 [INFO][4750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.198/32] ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.636373 containerd[1462]: 2026-03-14 00:25:47.583 [INFO][4750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0ff5c1e173 ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.636373 containerd[1462]: 2026-03-14 00:25:47.598 [INFO][4750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.636373 containerd[1462]: 2026-03-14 00:25:47.601 [INFO][4750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" 
Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"e9476b50-628d-4ade-b7da-1ee31561d583", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b", Pod:"calico-apiserver-5dbb9f9f6b-fmc6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie0ff5c1e173", MAC:"82:ad:fb:e7:32:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:47.636373 containerd[1462]: 2026-03-14 00:25:47.626 [INFO][4750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b" Namespace="calico-system" Pod="calico-apiserver-5dbb9f9f6b-fmc6z" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:47.661488 systemd-networkd[1368]: calic6663fc0cb7: Gained IPv6LL Mar 14 00:25:47.701499 containerd[1462]: time="2026-03-14T00:25:47.700920774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:47.701499 containerd[1462]: time="2026-03-14T00:25:47.701038972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:47.701499 containerd[1462]: time="2026-03-14T00:25:47.701070058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:47.701499 containerd[1462]: time="2026-03-14T00:25:47.701201771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:47.769974 systemd[1]: Started cri-containerd-ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b.scope - libcontainer container ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b. 
Mar 14 00:25:47.786073 containerd[1462]: time="2026-03-14T00:25:47.785921125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:47.790147 containerd[1462]: time="2026-03-14T00:25:47.788916286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 14 00:25:47.791196 containerd[1462]: time="2026-03-14T00:25:47.791148956Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:47.796124 containerd[1462]: time="2026-03-14T00:25:47.796074420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:47.800045 containerd[1462]: time="2026-03-14T00:25:47.800000326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.243901163s" Mar 14 00:25:47.800980 containerd[1462]: time="2026-03-14T00:25:47.800212180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 14 00:25:47.806976 containerd[1462]: time="2026-03-14T00:25:47.806074218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:25:47.812803 containerd[1462]: time="2026-03-14T00:25:47.812625429Z" level=info msg="CreateContainer within sandbox 
\"f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:25:47.839732 containerd[1462]: time="2026-03-14T00:25:47.839210047Z" level=info msg="CreateContainer within sandbox \"f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0071198f8fba5017f084cf886b0a272be8b72e8296d7f5f1c3f246fdc68afdf1\"" Mar 14 00:25:47.840748 containerd[1462]: time="2026-03-14T00:25:47.840685021Z" level=info msg="StartContainer for \"0071198f8fba5017f084cf886b0a272be8b72e8296d7f5f1c3f246fdc68afdf1\"" Mar 14 00:25:47.883482 containerd[1462]: time="2026-03-14T00:25:47.883293226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb9f9f6b-fmc6z,Uid:e9476b50-628d-4ade-b7da-1ee31561d583,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b\"" Mar 14 00:25:47.913064 systemd[1]: Started cri-containerd-0071198f8fba5017f084cf886b0a272be8b72e8296d7f5f1c3f246fdc68afdf1.scope - libcontainer container 0071198f8fba5017f084cf886b0a272be8b72e8296d7f5f1c3f246fdc68afdf1. 
Mar 14 00:25:47.977675 containerd[1462]: time="2026-03-14T00:25:47.977364709Z" level=info msg="StartContainer for \"0071198f8fba5017f084cf886b0a272be8b72e8296d7f5f1c3f246fdc68afdf1\" returns successfully" Mar 14 00:25:48.050687 containerd[1462]: time="2026-03-14T00:25:48.050622756Z" level=info msg="StopPodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\"" Mar 14 00:25:48.055292 containerd[1462]: time="2026-03-14T00:25:48.054828021Z" level=info msg="StopPodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\"" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.151 [INFO][4889] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.155 [INFO][4889] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" iface="eth0" netns="/var/run/netns/cni-5ef988c7-c791-d369-c733-3657c4aeb4fb" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.156 [INFO][4889] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" iface="eth0" netns="/var/run/netns/cni-5ef988c7-c791-d369-c733-3657c4aeb4fb" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.156 [INFO][4889] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" iface="eth0" netns="/var/run/netns/cni-5ef988c7-c791-d369-c733-3657c4aeb4fb" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.156 [INFO][4889] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.157 [INFO][4889] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.200 [INFO][4904] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.201 [INFO][4904] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.201 [INFO][4904] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.212 [WARNING][4904] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.212 [INFO][4904] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.215 [INFO][4904] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:48.220011 containerd[1462]: 2026-03-14 00:25:48.217 [INFO][4889] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:48.221310 containerd[1462]: time="2026-03-14T00:25:48.221258891Z" level=info msg="TearDown network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" successfully" Mar 14 00:25:48.221429 containerd[1462]: time="2026-03-14T00:25:48.221406590Z" level=info msg="StopPodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" returns successfully" Mar 14 00:25:48.227510 containerd[1462]: time="2026-03-14T00:25:48.227469699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8swjn,Uid:a9e9eb5c-f4d2-40c6-a693-3d1291777ac5,Namespace:kube-system,Attempt:1,}" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.147 [INFO][4892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.156 [INFO][4892] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" iface="eth0" netns="/var/run/netns/cni-a8280ec4-ed69-bb9a-8d5b-f442d9604eef" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.156 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" iface="eth0" netns="/var/run/netns/cni-a8280ec4-ed69-bb9a-8d5b-f442d9604eef" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.157 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" iface="eth0" netns="/var/run/netns/cni-a8280ec4-ed69-bb9a-8d5b-f442d9604eef" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.157 [INFO][4892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.157 [INFO][4892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.220 [INFO][4906] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.220 [INFO][4906] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.220 [INFO][4906] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.234 [WARNING][4906] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.234 [INFO][4906] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.236 [INFO][4906] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:48.241399 containerd[1462]: 2026-03-14 00:25:48.239 [INFO][4892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:48.242682 containerd[1462]: time="2026-03-14T00:25:48.241799296Z" level=info msg="TearDown network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" successfully" Mar 14 00:25:48.242682 containerd[1462]: time="2026-03-14T00:25:48.241837776Z" level=info msg="StopPodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" returns successfully" Mar 14 00:25:48.249135 containerd[1462]: time="2026-03-14T00:25:48.249073283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w7hq,Uid:00362120-b53e-4112-9493-f945eb34a049,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:48.268091 systemd[1]: run-netns-cni\x2d5ef988c7\x2dc791\x2dd369\x2dc733\x2d3657c4aeb4fb.mount: Deactivated successfully. 
Mar 14 00:25:48.268247 systemd[1]: run-netns-cni\x2da8280ec4\x2ded69\x2dbb9a\x2d8d5b\x2df442d9604eef.mount: Deactivated successfully. Mar 14 00:25:48.541196 systemd-networkd[1368]: cali42db452de00: Link UP Mar 14 00:25:48.541618 systemd-networkd[1368]: cali42db452de00: Gained carrier Mar 14 00:25:48.570731 kubelet[2594]: I0314 00:25:48.570634 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-6qx4h" podStartSLOduration=34.321931028 podStartE2EDuration="38.570594811s" podCreationTimestamp="2026-03-14 00:25:10 +0000 UTC" firstStartedPulling="2026-03-14 00:25:43.555423177 +0000 UTC m=+52.720910425" lastFinishedPulling="2026-03-14 00:25:47.804086946 +0000 UTC m=+56.969574208" observedRunningTime="2026-03-14 00:25:48.495934399 +0000 UTC m=+57.661421672" watchObservedRunningTime="2026-03-14 00:25:48.570594811 +0000 UTC m=+57.736082080" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.352 [INFO][4919] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0 coredns-7d764666f9- kube-system a9e9eb5c-f4d2-40c6-a693-3d1291777ac5 1005 0 2026-03-14 00:24:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da coredns-7d764666f9-8swjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali42db452de00 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.352 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.408 [INFO][4943] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" HandleID="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.424 [INFO][4943] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" HandleID="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"coredns-7d764666f9-8swjn", "timestamp":"2026-03-14 00:25:48.408337605 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000284dc0)} Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.424 
[INFO][4943] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.424 [INFO][4943] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.424 [INFO][4943] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.429 [INFO][4943] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.441 [INFO][4943] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.463 [INFO][4943] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.472 [INFO][4943] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.479 [INFO][4943] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.479 [INFO][4943] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.483 [INFO][4943] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335 Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.506 [INFO][4943] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.522 [INFO][4943] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.199/26] block=192.168.114.192/26 handle="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.522 [INFO][4943] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.199/26] handle="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.522 [INFO][4943] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:25:48.582511 containerd[1462]: 2026-03-14 00:25:48.522 [INFO][4943] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.199/26] IPv6=[] ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" HandleID="k8s-pod-network.332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.585559 containerd[1462]: 2026-03-14 00:25:48.527 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"coredns-7d764666f9-8swjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42db452de00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:48.585559 containerd[1462]: 2026-03-14 00:25:48.527 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.199/32] ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.585559 containerd[1462]: 2026-03-14 00:25:48.527 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42db452de00 ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.585559 containerd[1462]: 2026-03-14 00:25:48.542 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.585951 containerd[1462]: 2026-03-14 00:25:48.542 [INFO][4919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335", Pod:"coredns-7d764666f9-8swjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali42db452de00", MAC:"3a:f1:92:89:d1:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:48.585951 containerd[1462]: 2026-03-14 00:25:48.575 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335" Namespace="kube-system" Pod="coredns-7d764666f9-8swjn" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:48.624434 systemd-networkd[1368]: cali8ce741fafe2: Link UP Mar 14 00:25:48.629927 systemd-networkd[1368]: cali8ce741fafe2: Gained carrier Mar 14 00:25:48.664244 containerd[1462]: time="2026-03-14T00:25:48.664097649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:48.664244 containerd[1462]: time="2026-03-14T00:25:48.664189017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:48.664539 containerd[1462]: time="2026-03-14T00:25:48.664215692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:48.664539 containerd[1462]: time="2026-03-14T00:25:48.664341871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.377 [INFO][4930] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0 csi-node-driver- calico-system 00362120-b53e-4112-9493-f945eb34a049 1004 0 2026-03-14 00:25:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da csi-node-driver-6w7hq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ce741fafe2 [] [] }} ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.377 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.672748 
containerd[1462]: 2026-03-14 00:25:48.439 [INFO][4950] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" HandleID="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.467 [INFO][4950] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" HandleID="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", "pod":"csi-node-driver-6w7hq", "timestamp":"2026-03-14 00:25:48.439844076 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000376580)} Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.467 [INFO][4950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.522 [INFO][4950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.523 [INFO][4950] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da' Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.529 [INFO][4950] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.545 [INFO][4950] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.562 [INFO][4950] ipam/ipam.go 526: Trying affinity for 192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.576 [INFO][4950] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.579 [INFO][4950] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.579 [INFO][4950] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.583 [INFO][4950] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3 Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.593 [INFO][4950] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.192/26 
handle="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.607 [INFO][4950] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.200/26] block=192.168.114.192/26 handle="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.608 [INFO][4950] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.200/26] handle="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" host="ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da" Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.608 [INFO][4950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:48.672748 containerd[1462]: 2026-03-14 00:25:48.608 [INFO][4950] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.200/26] IPv6=[] ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" HandleID="k8s-pod-network.88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.674971 containerd[1462]: 2026-03-14 00:25:48.614 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"00362120-b53e-4112-9493-f945eb34a049", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"", Pod:"csi-node-driver-6w7hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ce741fafe2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:48.674971 containerd[1462]: 2026-03-14 00:25:48.615 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.200/32] ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.674971 containerd[1462]: 2026-03-14 00:25:48.615 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ce741fafe2 ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" 
WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.674971 containerd[1462]: 2026-03-14 00:25:48.635 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.674971 containerd[1462]: 2026-03-14 00:25:48.638 [INFO][4930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00362120-b53e-4112-9493-f945eb34a049", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3", Pod:"csi-node-driver-6w7hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ce741fafe2", MAC:"4e:00:ec:7a:06:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:48.674971 containerd[1462]: 2026-03-14 00:25:48.664 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3" Namespace="calico-system" Pod="csi-node-driver-6w7hq" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:48.736778 containerd[1462]: time="2026-03-14T00:25:48.735357982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:48.736778 containerd[1462]: time="2026-03-14T00:25:48.735439898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:48.736778 containerd[1462]: time="2026-03-14T00:25:48.735466704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:48.736778 containerd[1462]: time="2026-03-14T00:25:48.735581202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:48.746665 systemd[1]: Started cri-containerd-332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335.scope - libcontainer container 332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335. Mar 14 00:25:48.787186 systemd[1]: Started cri-containerd-88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3.scope - libcontainer container 88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3. Mar 14 00:25:48.889258 containerd[1462]: time="2026-03-14T00:25:48.888779990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8swjn,Uid:a9e9eb5c-f4d2-40c6-a693-3d1291777ac5,Namespace:kube-system,Attempt:1,} returns sandbox id \"332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335\"" Mar 14 00:25:48.909754 containerd[1462]: time="2026-03-14T00:25:48.907413085Z" level=info msg="CreateContainer within sandbox \"332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:25:48.934542 containerd[1462]: time="2026-03-14T00:25:48.934451874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w7hq,Uid:00362120-b53e-4112-9493-f945eb34a049,Namespace:calico-system,Attempt:1,} returns sandbox id \"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3\"" Mar 14 00:25:48.954920 containerd[1462]: time="2026-03-14T00:25:48.954734705Z" level=info msg="CreateContainer within sandbox \"332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61cb7d70aa9cd3a62c92ace22c69c4fcccbd1b10a331c1391c9a43216f28bd08\"" Mar 14 00:25:48.960029 containerd[1462]: time="2026-03-14T00:25:48.959916281Z" level=info msg="StartContainer for \"61cb7d70aa9cd3a62c92ace22c69c4fcccbd1b10a331c1391c9a43216f28bd08\"" Mar 14 00:25:49.042862 systemd[1]: Started 
cri-containerd-61cb7d70aa9cd3a62c92ace22c69c4fcccbd1b10a331c1391c9a43216f28bd08.scope - libcontainer container 61cb7d70aa9cd3a62c92ace22c69c4fcccbd1b10a331c1391c9a43216f28bd08. Mar 14 00:25:49.116836 containerd[1462]: time="2026-03-14T00:25:49.116781276Z" level=info msg="StartContainer for \"61cb7d70aa9cd3a62c92ace22c69c4fcccbd1b10a331c1391c9a43216f28bd08\" returns successfully" Mar 14 00:25:49.197563 systemd-networkd[1368]: calie0ff5c1e173: Gained IPv6LL Mar 14 00:25:49.540569 kubelet[2594]: I0314 00:25:49.540484 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-8swjn" podStartSLOduration=52.540463682 podStartE2EDuration="52.540463682s" podCreationTimestamp="2026-03-14 00:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:25:49.494885245 +0000 UTC m=+58.660372517" watchObservedRunningTime="2026-03-14 00:25:49.540463682 +0000 UTC m=+58.705950961" Mar 14 00:25:49.964919 systemd-networkd[1368]: cali42db452de00: Gained IPv6LL Mar 14 00:25:50.413966 systemd-networkd[1368]: cali8ce741fafe2: Gained IPv6LL Mar 14 00:25:50.996731 containerd[1462]: time="2026-03-14T00:25:50.996011284Z" level=info msg="StopPodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\"" Mar 14 00:25:51.003749 containerd[1462]: time="2026-03-14T00:25:51.003617380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:51.007910 containerd[1462]: time="2026-03-14T00:25:51.007851636Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:51.008528 containerd[1462]: time="2026-03-14T00:25:51.008466589Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:25:51.012989 containerd[1462]: time="2026-03-14T00:25:51.012929159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:51.015050 containerd[1462]: time="2026-03-14T00:25:51.015000558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.208875629s" Mar 14 00:25:51.015184 containerd[1462]: time="2026-03-14T00:25:51.015053877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:25:51.018155 containerd[1462]: time="2026-03-14T00:25:51.017212288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:25:51.069117 containerd[1462]: time="2026-03-14T00:25:51.069066568Z" level=info msg="CreateContainer within sandbox \"2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:25:51.099976 containerd[1462]: time="2026-03-14T00:25:51.099919140Z" level=info msg="CreateContainer within sandbox \"2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"38e1957b405bfe062560962aff2844a24ed14f15fb830c0c0450a9d7dc340602\"" Mar 14 00:25:51.104833 containerd[1462]: time="2026-03-14T00:25:51.104787270Z" level=info msg="StartContainer 
for \"38e1957b405bfe062560962aff2844a24ed14f15fb830c0c0450a9d7dc340602\"" Mar 14 00:25:51.185957 systemd[1]: Started cri-containerd-38e1957b405bfe062560962aff2844a24ed14f15fb830c0c0450a9d7dc340602.scope - libcontainer container 38e1957b405bfe062560962aff2844a24ed14f15fb830c0c0450a9d7dc340602. Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.113 [WARNING][5205] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335", Pod:"coredns-7d764666f9-8swjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42db452de00", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.114 [INFO][5205] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.114 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" iface="eth0" netns="" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.114 [INFO][5205] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.114 [INFO][5205] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.194 [INFO][5217] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.194 [INFO][5217] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.194 [INFO][5217] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.204 [WARNING][5217] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.204 [INFO][5217] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.207 [INFO][5217] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:51.211364 containerd[1462]: 2026-03-14 00:25:51.209 [INFO][5205] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.212947 containerd[1462]: time="2026-03-14T00:25:51.211829234Z" level=info msg="TearDown network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" successfully" Mar 14 00:25:51.212947 containerd[1462]: time="2026-03-14T00:25:51.212478274Z" level=info msg="StopPodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" returns successfully" Mar 14 00:25:51.214346 containerd[1462]: time="2026-03-14T00:25:51.213902037Z" level=info msg="RemovePodSandbox for \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\"" Mar 14 00:25:51.214346 containerd[1462]: time="2026-03-14T00:25:51.213948357Z" level=info msg="Forcibly stopping sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\"" Mar 14 00:25:51.279394 containerd[1462]: time="2026-03-14T00:25:51.279244336Z" level=info msg="StartContainer for 
\"38e1957b405bfe062560962aff2844a24ed14f15fb830c0c0450a9d7dc340602\" returns successfully" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.286 [WARNING][5253] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a9e9eb5c-f4d2-40c6-a693-3d1291777ac5", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"332f1128ed75db8b329620d005b3c893ed6d568a5c885ee48f794ac752b01335", Pod:"coredns-7d764666f9-8swjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42db452de00", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.287 [INFO][5253] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.287 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" iface="eth0" netns="" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.287 [INFO][5253] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.287 [INFO][5253] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.331 [INFO][5268] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.331 [INFO][5268] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.331 [INFO][5268] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.354 [WARNING][5268] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.354 [INFO][5268] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" HandleID="k8s-pod-network.ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--8swjn-eth0" Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.357 [INFO][5268] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:51.362228 containerd[1462]: 2026-03-14 00:25:51.359 [INFO][5253] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b" Mar 14 00:25:51.363065 containerd[1462]: time="2026-03-14T00:25:51.362263107Z" level=info msg="TearDown network for sandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" successfully" Mar 14 00:25:51.369156 containerd[1462]: time="2026-03-14T00:25:51.368962790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:51.369156 containerd[1462]: time="2026-03-14T00:25:51.369073670Z" level=info msg="RemovePodSandbox \"ec71bb9e3acfbefd37ff02e304d1b00b11e1aaf4fafcee4c43acfd5e62dc491b\" returns successfully" Mar 14 00:25:51.370666 containerd[1462]: time="2026-03-14T00:25:51.370389760Z" level=info msg="StopPodSandbox for \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\"" Mar 14 00:25:51.531544 kubelet[2594]: I0314 00:25:51.530506 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-559ddfd44c-rtx2t" podStartSLOduration=33.168208922 podStartE2EDuration="39.530489125s" podCreationTimestamp="2026-03-14 00:25:12 +0000 UTC" firstStartedPulling="2026-03-14 00:25:44.654533997 +0000 UTC m=+53.820021246" lastFinishedPulling="2026-03-14 00:25:51.016814177 +0000 UTC m=+60.182301449" observedRunningTime="2026-03-14 00:25:51.526967918 +0000 UTC m=+60.692455190" watchObservedRunningTime="2026-03-14 00:25:51.530489125 +0000 UTC m=+60.695976398" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.457 [WARNING][5291] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"e9476b50-628d-4ade-b7da-1ee31561d583", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b", Pod:"calico-apiserver-5dbb9f9f6b-fmc6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie0ff5c1e173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.457 [INFO][5291] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.457 
[INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" iface="eth0" netns="" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.457 [INFO][5291] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.457 [INFO][5291] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.544 [INFO][5300] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.544 [INFO][5300] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.544 [INFO][5300] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.567 [WARNING][5300] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.568 [INFO][5300] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.578 [INFO][5300] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:51.585088 containerd[1462]: 2026-03-14 00:25:51.582 [INFO][5291] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.585088 containerd[1462]: time="2026-03-14T00:25:51.584934593Z" level=info msg="TearDown network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" successfully" Mar 14 00:25:51.585088 containerd[1462]: time="2026-03-14T00:25:51.584972343Z" level=info msg="StopPodSandbox for \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" returns successfully" Mar 14 00:25:51.587142 containerd[1462]: time="2026-03-14T00:25:51.586870420Z" level=info msg="RemovePodSandbox for \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\"" Mar 14 00:25:51.587142 containerd[1462]: time="2026-03-14T00:25:51.586937052Z" level=info msg="Forcibly stopping sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\"" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.677 [WARNING][5333] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"e9476b50-628d-4ade-b7da-1ee31561d583", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b", Pod:"calico-apiserver-5dbb9f9f6b-fmc6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie0ff5c1e173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.678 [INFO][5333] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.744855 
containerd[1462]: 2026-03-14 00:25:51.678 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" iface="eth0" netns="" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.678 [INFO][5333] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.678 [INFO][5333] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.726 [INFO][5345] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.729 [INFO][5345] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.729 [INFO][5345] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.738 [WARNING][5345] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.738 [INFO][5345] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" HandleID="k8s-pod-network.ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--fmc6z-eth0" Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.740 [INFO][5345] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:51.744855 containerd[1462]: 2026-03-14 00:25:51.742 [INFO][5333] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a" Mar 14 00:25:51.745871 containerd[1462]: time="2026-03-14T00:25:51.744947763Z" level=info msg="TearDown network for sandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" successfully" Mar 14 00:25:51.750464 containerd[1462]: time="2026-03-14T00:25:51.750384761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:51.750590 containerd[1462]: time="2026-03-14T00:25:51.750476108Z" level=info msg="RemovePodSandbox \"ed00cf7e3aa41c13e3079de21a499a122c4df38941beefe653d696a2dff9103a\" returns successfully" Mar 14 00:25:51.751484 containerd[1462]: time="2026-03-14T00:25:51.751169472Z" level=info msg="StopPodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\"" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.806 [WARNING][5359] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"c1a7d672-fe4a-4281-9f30-d5ed4679c445", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd", Pod:"calico-apiserver-5dbb9f9f6b-snvzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeae7dec6ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.807 [INFO][5359] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.807 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" iface="eth0" netns="" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.807 [INFO][5359] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.807 [INFO][5359] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.841 [INFO][5366] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.841 [INFO][5366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.841 [INFO][5366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.852 [WARNING][5366] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.852 [INFO][5366] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.857 [INFO][5366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:51.861247 containerd[1462]: 2026-03-14 00:25:51.859 [INFO][5359] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.862221 containerd[1462]: time="2026-03-14T00:25:51.861289798Z" level=info msg="TearDown network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" successfully" Mar 14 00:25:51.862221 containerd[1462]: time="2026-03-14T00:25:51.861322264Z" level=info msg="StopPodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" returns successfully" Mar 14 00:25:51.863026 containerd[1462]: time="2026-03-14T00:25:51.862974791Z" level=info msg="RemovePodSandbox for \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\"" Mar 14 00:25:51.863026 containerd[1462]: time="2026-03-14T00:25:51.863020508Z" level=info msg="Forcibly stopping sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\"" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.910 [WARNING][5380] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0", GenerateName:"calico-apiserver-5dbb9f9f6b-", Namespace:"calico-system", SelfLink:"", UID:"c1a7d672-fe4a-4281-9f30-d5ed4679c445", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb9f9f6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd", Pod:"calico-apiserver-5dbb9f9f6b-snvzj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibeae7dec6ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.910 [INFO][5380] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.910 
[INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" iface="eth0" netns="" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.910 [INFO][5380] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.910 [INFO][5380] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.943 [INFO][5387] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.943 [INFO][5387] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.943 [INFO][5387] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.953 [WARNING][5387] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.953 [INFO][5387] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" HandleID="k8s-pod-network.1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--apiserver--5dbb9f9f6b--snvzj-eth0" Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.955 [INFO][5387] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:51.962913 containerd[1462]: 2026-03-14 00:25:51.957 [INFO][5380] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca" Mar 14 00:25:51.966122 containerd[1462]: time="2026-03-14T00:25:51.962922601Z" level=info msg="TearDown network for sandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" successfully" Mar 14 00:25:51.970188 containerd[1462]: time="2026-03-14T00:25:51.969651242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:51.970188 containerd[1462]: time="2026-03-14T00:25:51.969763289Z" level=info msg="RemovePodSandbox \"1acb8e84fb653496f45b1768a34a8a6cb6e51fed37227120703282c5479d63ca\" returns successfully" Mar 14 00:25:51.970796 containerd[1462]: time="2026-03-14T00:25:51.970423879Z" level=info msg="StopPodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\"" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.044 [WARNING][5401] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0", GenerateName:"calico-kube-controllers-559ddfd44c-", Namespace:"calico-system", SelfLink:"", UID:"b65e270e-651d-4115-b14d-9b8e312de715", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559ddfd44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0", Pod:"calico-kube-controllers-559ddfd44c-rtx2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali921ae8d4eb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.044 [INFO][5401] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.044 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" iface="eth0" netns="" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.044 [INFO][5401] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.044 [INFO][5401] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.098 [INFO][5409] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.099 [INFO][5409] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.099 [INFO][5409] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.112 [WARNING][5409] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.112 [INFO][5409] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.115 [INFO][5409] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:52.120417 containerd[1462]: 2026-03-14 00:25:52.117 [INFO][5401] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.120417 containerd[1462]: time="2026-03-14T00:25:52.120381240Z" level=info msg="TearDown network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" successfully" Mar 14 00:25:52.124722 containerd[1462]: time="2026-03-14T00:25:52.120420753Z" level=info msg="StopPodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" returns successfully" Mar 14 00:25:52.124722 containerd[1462]: time="2026-03-14T00:25:52.122555180Z" level=info msg="RemovePodSandbox for \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\"" Mar 14 00:25:52.124722 containerd[1462]: time="2026-03-14T00:25:52.122609360Z" level=info msg="Forcibly stopping sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\"" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.215 [WARNING][5423] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0", GenerateName:"calico-kube-controllers-559ddfd44c-", Namespace:"calico-system", SelfLink:"", UID:"b65e270e-651d-4115-b14d-9b8e312de715", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559ddfd44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"2f88a60b26e268976b9288e7315eef96162e0d8c97839d77c9db8143e60bf5d0", Pod:"calico-kube-controllers-559ddfd44c-rtx2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali921ae8d4eb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.215 [INFO][5423] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.287454 
containerd[1462]: 2026-03-14 00:25:52.215 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" iface="eth0" netns="" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.215 [INFO][5423] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.215 [INFO][5423] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.263 [INFO][5430] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.264 [INFO][5430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.264 [INFO][5430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.275 [WARNING][5430] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.275 [INFO][5430] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" HandleID="k8s-pod-network.66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-calico--kube--controllers--559ddfd44c--rtx2t-eth0" Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.281 [INFO][5430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:52.287454 containerd[1462]: 2026-03-14 00:25:52.285 [INFO][5423] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4" Mar 14 00:25:52.288423 containerd[1462]: time="2026-03-14T00:25:52.287511557Z" level=info msg="TearDown network for sandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" successfully" Mar 14 00:25:52.303439 containerd[1462]: time="2026-03-14T00:25:52.303372716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:52.303613 containerd[1462]: time="2026-03-14T00:25:52.303498630Z" level=info msg="RemovePodSandbox \"66860f9b10ab70f1a6ac6f5d6b185a32a3f5d54517f070d528405c9a01bd5fd4\" returns successfully" Mar 14 00:25:52.304830 containerd[1462]: time="2026-03-14T00:25:52.304406342Z" level=info msg="StopPodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\"" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.387 [WARNING][5444] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.387 [INFO][5444] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.387 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" iface="eth0" netns="" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.387 [INFO][5444] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.387 [INFO][5444] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.423 [INFO][5451] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.423 [INFO][5451] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.423 [INFO][5451] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.432 [WARNING][5451] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.432 [INFO][5451] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.435 [INFO][5451] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:52.442813 containerd[1462]: 2026-03-14 00:25:52.438 [INFO][5444] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.445236 containerd[1462]: time="2026-03-14T00:25:52.444758346Z" level=info msg="TearDown network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" successfully" Mar 14 00:25:52.445236 containerd[1462]: time="2026-03-14T00:25:52.444805157Z" level=info msg="StopPodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" returns successfully" Mar 14 00:25:52.446133 containerd[1462]: time="2026-03-14T00:25:52.446053710Z" level=info msg="RemovePodSandbox for \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\"" Mar 14 00:25:52.446133 containerd[1462]: time="2026-03-14T00:25:52.446094242Z" level=info msg="Forcibly stopping sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\"" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.518 [WARNING][5465] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving 
forward with the clean up ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" WorkloadEndpoint="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.519 [INFO][5465] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.520 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" iface="eth0" netns="" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.520 [INFO][5465] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.520 [INFO][5465] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.574 [INFO][5476] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.574 [INFO][5476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.574 [INFO][5476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.587 [WARNING][5476] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.587 [INFO][5476] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" HandleID="k8s-pod-network.22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-whisker--ff48b58b9--qcnlj-eth0" Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.591 [INFO][5476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:52.596826 containerd[1462]: 2026-03-14 00:25:52.594 [INFO][5465] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca" Mar 14 00:25:52.596826 containerd[1462]: time="2026-03-14T00:25:52.596808428Z" level=info msg="TearDown network for sandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" successfully" Mar 14 00:25:52.604921 containerd[1462]: time="2026-03-14T00:25:52.604480871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:52.604921 containerd[1462]: time="2026-03-14T00:25:52.604572569Z" level=info msg="RemovePodSandbox \"22d86fc8e20c78e8e8607cc660b98942d72ce7cbacebe5555c11c73cf29feaca\" returns successfully" Mar 14 00:25:52.606132 containerd[1462]: time="2026-03-14T00:25:52.605997386Z" level=info msg="StopPodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\"" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.671 [WARNING][5490] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"133fdf5a-fd04-49b7-9129-b4e1bf634740", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf", Pod:"coredns-7d764666f9-q9d7h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calic6663fc0cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.672 [INFO][5490] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.672 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" iface="eth0" netns="" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.672 [INFO][5490] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.672 [INFO][5490] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.719 [INFO][5497] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.720 [INFO][5497] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.720 [INFO][5497] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.743 [WARNING][5497] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.743 [INFO][5497] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.746 [INFO][5497] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:52.754078 containerd[1462]: 2026-03-14 00:25:52.749 [INFO][5490] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.754078 containerd[1462]: time="2026-03-14T00:25:52.753029481Z" level=info msg="TearDown network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" successfully" Mar 14 00:25:52.754078 containerd[1462]: time="2026-03-14T00:25:52.753066175Z" level=info msg="StopPodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" returns successfully" Mar 14 00:25:52.756249 containerd[1462]: time="2026-03-14T00:25:52.755966522Z" level=info msg="RemovePodSandbox for \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\"" Mar 14 00:25:52.756249 containerd[1462]: time="2026-03-14T00:25:52.756007553Z" level=info msg="Forcibly stopping sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\"" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.828 [WARNING][5511] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"133fdf5a-fd04-49b7-9129-b4e1bf634740", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"68b1532e03a41282c9c06191f5d435f3365bff3d381bcd74cbee6fd05425eccf", Pod:"coredns-7d764666f9-q9d7h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6663fc0cb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.828 [INFO][5511] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.828 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" iface="eth0" netns="" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.828 [INFO][5511] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.828 [INFO][5511] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.875 [INFO][5518] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.875 [INFO][5518] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.875 [INFO][5518] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.887 [WARNING][5518] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.887 [INFO][5518] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" HandleID="k8s-pod-network.bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-coredns--7d764666f9--q9d7h-eth0" Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.890 [INFO][5518] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:52.897370 containerd[1462]: 2026-03-14 00:25:52.894 [INFO][5511] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6" Mar 14 00:25:52.898665 containerd[1462]: time="2026-03-14T00:25:52.897428811Z" level=info msg="TearDown network for sandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" successfully" Mar 14 00:25:52.905585 containerd[1462]: time="2026-03-14T00:25:52.904864357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:52.905585 containerd[1462]: time="2026-03-14T00:25:52.904959462Z" level=info msg="RemovePodSandbox \"bfadd59b2fba9d3a09352202507a7bdc86e2108a551bfc830c2faf369176c2b6\" returns successfully" Mar 14 00:25:52.907967 containerd[1462]: time="2026-03-14T00:25:52.907543446Z" level=info msg="StopPodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\"" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:52.984 [WARNING][5533] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a7711bee-34a0-429d-9db5-924b7445ab4d", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747", Pod:"goldmane-9f7667bb8-6qx4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif14e1101612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:52.986 [INFO][5533] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:52.986 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" iface="eth0" netns="" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:52.986 [INFO][5533] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:52.986 [INFO][5533] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.048 [INFO][5541] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.049 [INFO][5541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.049 [INFO][5541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.065 [WARNING][5541] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.065 [INFO][5541] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.068 [INFO][5541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:53.074655 containerd[1462]: 2026-03-14 00:25:53.071 [INFO][5533] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.076329 containerd[1462]: time="2026-03-14T00:25:53.074682427Z" level=info msg="TearDown network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" successfully" Mar 14 00:25:53.076329 containerd[1462]: time="2026-03-14T00:25:53.074732569Z" level=info msg="StopPodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" returns successfully" Mar 14 00:25:53.076502 containerd[1462]: time="2026-03-14T00:25:53.076282636Z" level=info msg="RemovePodSandbox for \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\"" Mar 14 00:25:53.076502 containerd[1462]: time="2026-03-14T00:25:53.076364172Z" level=info msg="Forcibly stopping sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\"" Mar 14 00:25:53.189380 ntpd[1425]: Listen normally on 11 calif14e1101612 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:25:53.189486 ntpd[1425]: Listen normally on 12 cali921ae8d4eb7 [fe80::ecee:eeff:feee:eeee%9]:123
Mar 14 00:25:53.189545 ntpd[1425]: Listen normally on 13 calibeae7dec6ce [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:25:53.189604 ntpd[1425]: Listen normally on 14 calic6663fc0cb7 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:25:53.189662 ntpd[1425]: Listen normally on 15 calie0ff5c1e173 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 14 00:25:53.190786 ntpd[1425]: Listen normally on 16 cali42db452de00 [fe80::ecee:eeff:feee:eeee%13]:123 Mar 14 00:25:53.190875 ntpd[1425]: Listen normally on 17 cali8ce741fafe2 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.149 [WARNING][5555] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"a7711bee-34a0-429d-9db5-924b7445ab4d", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"f9a03e71bde37cc28de13c6f517e61ce35219eead3cce20a843b6cb18c14b747", Pod:"goldmane-9f7667bb8-6qx4h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif14e1101612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.150 [INFO][5555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.150 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" iface="eth0" netns="" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.150 [INFO][5555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.151 [INFO][5555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.210 [INFO][5562] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.211 [INFO][5562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.211 [INFO][5562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.225 [WARNING][5562] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.225 [INFO][5562] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" HandleID="k8s-pod-network.02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-goldmane--9f7667bb8--6qx4h-eth0" Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.228 [INFO][5562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:53.239551 containerd[1462]: 2026-03-14 00:25:53.233 [INFO][5555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027" Mar 14 00:25:53.239551 containerd[1462]: time="2026-03-14T00:25:53.238868379Z" level=info msg="TearDown network for sandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" successfully" Mar 14 00:25:53.245474 containerd[1462]: time="2026-03-14T00:25:53.244348733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:53.245474 containerd[1462]: time="2026-03-14T00:25:53.244418612Z" level=info msg="RemovePodSandbox \"02be2fe5a5f0224a8101b776cb9be38434f271bcd223fd611b1789e070c29027\" returns successfully" Mar 14 00:25:53.246860 containerd[1462]: time="2026-03-14T00:25:53.246371974Z" level=info msg="StopPodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\"" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.319 [WARNING][5576] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00362120-b53e-4112-9493-f945eb34a049", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3", Pod:"csi-node-driver-6w7hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ce741fafe2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.319 [INFO][5576] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.319 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" iface="eth0" netns="" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.319 [INFO][5576] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.320 [INFO][5576] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.378 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.378 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.378 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.402 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.402 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.405 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:53.413507 containerd[1462]: 2026-03-14 00:25:53.410 [INFO][5576] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.415131 containerd[1462]: time="2026-03-14T00:25:53.415088878Z" level=info msg="TearDown network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" successfully" Mar 14 00:25:53.415253 containerd[1462]: time="2026-03-14T00:25:53.415231619Z" level=info msg="StopPodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" returns successfully" Mar 14 00:25:53.416651 containerd[1462]: time="2026-03-14T00:25:53.416027143Z" level=info msg="RemovePodSandbox for \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\"" Mar 14 00:25:53.416651 containerd[1462]: time="2026-03-14T00:25:53.416074505Z" level=info msg="Forcibly stopping sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\"" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.492 [WARNING][5598] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00362120-b53e-4112-9493-f945eb34a049", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20260313-2100-3a01cf555f99608275da", ContainerID:"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3", Pod:"csi-node-driver-6w7hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ce741fafe2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.492 [INFO][5598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.492 
[INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" iface="eth0" netns="" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.492 [INFO][5598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.492 [INFO][5598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.539 [INFO][5605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.539 [INFO][5605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.539 [INFO][5605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.555 [WARNING][5605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.555 [INFO][5605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" HandleID="k8s-pod-network.310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Workload="ci--4081--3--6--nightly--20260313--2100--3a01cf555f99608275da-k8s-csi--node--driver--6w7hq-eth0" Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.558 [INFO][5605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:53.564385 containerd[1462]: 2026-03-14 00:25:53.560 [INFO][5598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2" Mar 14 00:25:53.564385 containerd[1462]: time="2026-03-14T00:25:53.564270710Z" level=info msg="TearDown network for sandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" successfully" Mar 14 00:25:53.570785 containerd[1462]: time="2026-03-14T00:25:53.570733258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:53.570922 containerd[1462]: time="2026-03-14T00:25:53.570828631Z" level=info msg="RemovePodSandbox \"310e419934aab7d62e94c6f1cd10186cbf13a3a63b5406e9b7792862ad86c2c2\" returns successfully" Mar 14 00:25:54.235294 containerd[1462]: time="2026-03-14T00:25:54.235226012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:54.236614 containerd[1462]: time="2026-03-14T00:25:54.236537280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:25:54.238026 containerd[1462]: time="2026-03-14T00:25:54.237789018Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:54.241165 containerd[1462]: time="2026-03-14T00:25:54.241081660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:54.242452 containerd[1462]: time="2026-03-14T00:25:54.242270213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.225013468s" Mar 14 00:25:54.242452 containerd[1462]: time="2026-03-14T00:25:54.242315798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:25:54.244747 containerd[1462]: time="2026-03-14T00:25:54.244517881Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:25:54.250279 containerd[1462]: time="2026-03-14T00:25:54.250237496Z" level=info msg="CreateContainer within sandbox \"f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:25:54.267823 containerd[1462]: time="2026-03-14T00:25:54.267750576Z" level=info msg="CreateContainer within sandbox \"f3d2fae5e01693620c75986b12c4b123c1986ce7e0b7ebb0cfc533c81593c0fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"809647f5382decc313bfcd909daf47e5ae3793f309ec4529b66cad517fbe1e80\"" Mar 14 00:25:54.270836 containerd[1462]: time="2026-03-14T00:25:54.270568253Z" level=info msg="StartContainer for \"809647f5382decc313bfcd909daf47e5ae3793f309ec4529b66cad517fbe1e80\"" Mar 14 00:25:54.331946 systemd[1]: Started cri-containerd-809647f5382decc313bfcd909daf47e5ae3793f309ec4529b66cad517fbe1e80.scope - libcontainer container 809647f5382decc313bfcd909daf47e5ae3793f309ec4529b66cad517fbe1e80. 
Mar 14 00:25:54.402054 containerd[1462]: time="2026-03-14T00:25:54.400918522Z" level=info msg="StartContainer for \"809647f5382decc313bfcd909daf47e5ae3793f309ec4529b66cad517fbe1e80\" returns successfully" Mar 14 00:25:54.491966 containerd[1462]: time="2026-03-14T00:25:54.491805818Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:54.495894 containerd[1462]: time="2026-03-14T00:25:54.495827048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:25:54.501499 containerd[1462]: time="2026-03-14T00:25:54.501403116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 256.840828ms" Mar 14 00:25:54.502844 containerd[1462]: time="2026-03-14T00:25:54.501503850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:25:54.507511 containerd[1462]: time="2026-03-14T00:25:54.507469897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:25:54.513312 containerd[1462]: time="2026-03-14T00:25:54.513263586Z" level=info msg="CreateContainer within sandbox \"ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:25:54.532111 containerd[1462]: time="2026-03-14T00:25:54.532036863Z" level=info msg="CreateContainer within sandbox \"ea8e744eddbeb71b8348fcffe4993a4f44dadaba6713f3b9801a87465613863b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"ca57c516cef71e931ac6313c13fddbf7a776a4d5c5b818a1fc18a36e2a8d5bfe\"" Mar 14 00:25:54.533437 containerd[1462]: time="2026-03-14T00:25:54.533392375Z" level=info msg="StartContainer for \"ca57c516cef71e931ac6313c13fddbf7a776a4d5c5b818a1fc18a36e2a8d5bfe\"" Mar 14 00:25:54.604242 systemd[1]: Started cri-containerd-ca57c516cef71e931ac6313c13fddbf7a776a4d5c5b818a1fc18a36e2a8d5bfe.scope - libcontainer container ca57c516cef71e931ac6313c13fddbf7a776a4d5c5b818a1fc18a36e2a8d5bfe. Mar 14 00:25:54.693996 containerd[1462]: time="2026-03-14T00:25:54.693846064Z" level=info msg="StartContainer for \"ca57c516cef71e931ac6313c13fddbf7a776a4d5c5b818a1fc18a36e2a8d5bfe\" returns successfully" Mar 14 00:25:55.587894 kubelet[2594]: I0314 00:25:55.585985 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5dbb9f9f6b-snvzj" podStartSLOduration=37.264215305 podStartE2EDuration="45.585964435s" podCreationTimestamp="2026-03-14 00:25:10 +0000 UTC" firstStartedPulling="2026-03-14 00:25:45.922104232 +0000 UTC m=+55.087591498" lastFinishedPulling="2026-03-14 00:25:54.243853365 +0000 UTC m=+63.409340628" observedRunningTime="2026-03-14 00:25:54.574558521 +0000 UTC m=+63.740045796" watchObservedRunningTime="2026-03-14 00:25:55.585964435 +0000 UTC m=+64.751451707" Mar 14 00:25:56.217889 containerd[1462]: time="2026-03-14T00:25:56.217828157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:56.219961 containerd[1462]: time="2026-03-14T00:25:56.219895717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:25:56.222867 containerd[1462]: time="2026-03-14T00:25:56.222817560Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 
00:25:56.226897 containerd[1462]: time="2026-03-14T00:25:56.226846466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:56.227798 containerd[1462]: time="2026-03-14T00:25:56.227752707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.720231283s" Mar 14 00:25:56.227898 containerd[1462]: time="2026-03-14T00:25:56.227803512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:25:56.240390 containerd[1462]: time="2026-03-14T00:25:56.240339211Z" level=info msg="CreateContainer within sandbox \"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:25:56.257973 containerd[1462]: time="2026-03-14T00:25:56.257912077Z" level=info msg="CreateContainer within sandbox \"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d05347e2412adfd738c07799602f6c3e28a5beefce74b3d885d9edc4ac49d146\"" Mar 14 00:25:56.258959 containerd[1462]: time="2026-03-14T00:25:56.258922520Z" level=info msg="StartContainer for \"d05347e2412adfd738c07799602f6c3e28a5beefce74b3d885d9edc4ac49d146\"" Mar 14 00:25:56.340982 systemd[1]: Started cri-containerd-d05347e2412adfd738c07799602f6c3e28a5beefce74b3d885d9edc4ac49d146.scope - libcontainer container d05347e2412adfd738c07799602f6c3e28a5beefce74b3d885d9edc4ac49d146. 
Mar 14 00:25:56.481079 containerd[1462]: time="2026-03-14T00:25:56.480435273Z" level=info msg="StartContainer for \"d05347e2412adfd738c07799602f6c3e28a5beefce74b3d885d9edc4ac49d146\" returns successfully" Mar 14 00:25:56.484999 containerd[1462]: time="2026-03-14T00:25:56.484936965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:25:56.564526 kubelet[2594]: I0314 00:25:56.563470 2594 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:25:57.663730 kubelet[2594]: I0314 00:25:57.662604 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5dbb9f9f6b-fmc6z" podStartSLOduration=41.047746308 podStartE2EDuration="47.662582265s" podCreationTimestamp="2026-03-14 00:25:10 +0000 UTC" firstStartedPulling="2026-03-14 00:25:47.889260305 +0000 UTC m=+57.054747554" lastFinishedPulling="2026-03-14 00:25:54.504096241 +0000 UTC m=+63.669583511" observedRunningTime="2026-03-14 00:25:55.587371789 +0000 UTC m=+64.752859091" watchObservedRunningTime="2026-03-14 00:25:57.662582265 +0000 UTC m=+66.828069541" Mar 14 00:25:58.085172 containerd[1462]: time="2026-03-14T00:25:58.085100874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:58.086596 containerd[1462]: time="2026-03-14T00:25:58.086517517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:25:58.088462 containerd[1462]: time="2026-03-14T00:25:58.088355658Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:58.092482 containerd[1462]: time="2026-03-14T00:25:58.092201779Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:58.094686 containerd[1462]: time="2026-03-14T00:25:58.094493491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.60948873s" Mar 14 00:25:58.094686 containerd[1462]: time="2026-03-14T00:25:58.094545834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:25:58.102452 containerd[1462]: time="2026-03-14T00:25:58.102401687Z" level=info msg="CreateContainer within sandbox \"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:25:58.125718 containerd[1462]: time="2026-03-14T00:25:58.125502635Z" level=info msg="CreateContainer within sandbox \"88f77a11ef51f6a3850558e915c79c0fc84a7da7194a3ce7df6572514d913ca3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2bff7727658a449226ad32be36f78c8956ded5f1f57eb18547ecef9b1467e3d1\"" Mar 14 00:25:58.128272 containerd[1462]: time="2026-03-14T00:25:58.126934733Z" level=info msg="StartContainer for \"2bff7727658a449226ad32be36f78c8956ded5f1f57eb18547ecef9b1467e3d1\"" Mar 14 00:25:58.192006 systemd[1]: Started cri-containerd-2bff7727658a449226ad32be36f78c8956ded5f1f57eb18547ecef9b1467e3d1.scope - libcontainer container 2bff7727658a449226ad32be36f78c8956ded5f1f57eb18547ecef9b1467e3d1. 
Mar 14 00:25:58.240788 containerd[1462]: time="2026-03-14T00:25:58.240235005Z" level=info msg="StartContainer for \"2bff7727658a449226ad32be36f78c8956ded5f1f57eb18547ecef9b1467e3d1\" returns successfully" Mar 14 00:25:59.150964 kubelet[2594]: I0314 00:25:59.150923 2594 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:25:59.150964 kubelet[2594]: I0314 00:25:59.150970 2594 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:26:03.412104 systemd[1]: Started sshd@11-10.128.0.67:22-4.153.228.146:50552.service - OpenSSH per-connection server daemon (4.153.228.146:50552). Mar 14 00:26:03.596762 kubelet[2594]: I0314 00:26:03.595246 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-6w7hq" podStartSLOduration=42.442274343 podStartE2EDuration="51.595213618s" podCreationTimestamp="2026-03-14 00:25:12 +0000 UTC" firstStartedPulling="2026-03-14 00:25:48.943464726 +0000 UTC m=+58.108951998" lastFinishedPulling="2026-03-14 00:25:58.096404026 +0000 UTC m=+67.261891273" observedRunningTime="2026-03-14 00:25:58.591362294 +0000 UTC m=+67.756849567" watchObservedRunningTime="2026-03-14 00:26:03.595213618 +0000 UTC m=+72.760700893" Mar 14 00:26:03.684053 sshd[5852]: Accepted publickey for core from 4.153.228.146 port 50552 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:26:03.686124 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:03.692769 systemd-logind[1439]: New session 10 of user core. Mar 14 00:26:03.695986 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 14 00:26:03.962528 sshd[5852]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:03.968836 systemd[1]: sshd@11-10.128.0.67:22-4.153.228.146:50552.service: Deactivated successfully. Mar 14 00:26:03.972251 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:26:03.973424 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:26:03.975995 systemd-logind[1439]: Removed session 10. Mar 14 00:26:09.013183 systemd[1]: Started sshd@12-10.128.0.67:22-4.153.228.146:43656.service - OpenSSH per-connection server daemon (4.153.228.146:43656). Mar 14 00:26:09.253027 sshd[5896]: Accepted publickey for core from 4.153.228.146 port 43656 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:26:09.256459 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:09.267309 systemd-logind[1439]: New session 11 of user core. Mar 14 00:26:09.273015 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:26:09.532036 sshd[5896]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:09.536880 systemd[1]: sshd@12-10.128.0.67:22-4.153.228.146:43656.service: Deactivated successfully. Mar 14 00:26:09.539411 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:26:09.541634 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:26:09.544195 systemd-logind[1439]: Removed session 11. Mar 14 00:26:14.587084 systemd[1]: Started sshd@13-10.128.0.67:22-4.153.228.146:43658.service - OpenSSH per-connection server daemon (4.153.228.146:43658). Mar 14 00:26:15.070873 sshd[5921]: Accepted publickey for core from 4.153.228.146 port 43658 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:26:15.072825 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:15.080084 systemd-logind[1439]: New session 12 of user core. 
Mar 14 00:26:15.084938 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:26:15.338054 sshd[5921]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:15.345029 systemd[1]: sshd@13-10.128.0.67:22-4.153.228.146:43658.service: Deactivated successfully. Mar 14 00:26:15.348254 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:26:15.349565 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:26:15.351246 systemd-logind[1439]: Removed session 12. Mar 14 00:26:18.646913 kubelet[2594]: I0314 00:26:18.646243 2594 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:26:20.392747 systemd[1]: Started sshd@14-10.128.0.67:22-4.153.228.146:48358.service - OpenSSH per-connection server daemon (4.153.228.146:48358). Mar 14 00:26:20.648169 sshd[5966]: Accepted publickey for core from 4.153.228.146 port 48358 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:26:20.649107 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:20.656233 systemd-logind[1439]: New session 13 of user core. Mar 14 00:26:20.660951 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:26:20.944527 sshd[5966]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:20.950821 systemd[1]: sshd@14-10.128.0.67:22-4.153.228.146:48358.service: Deactivated successfully. Mar 14 00:26:20.954366 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:26:20.955582 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:26:20.957337 systemd-logind[1439]: Removed session 13. Mar 14 00:26:26.003355 systemd[1]: Started sshd@15-10.128.0.67:22-4.153.228.146:48364.service - OpenSSH per-connection server daemon (4.153.228.146:48364). 
Mar 14 00:26:26.260495 sshd[6037]: Accepted publickey for core from 4.153.228.146 port 48364 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:26:26.262309 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:26.269500 systemd-logind[1439]: New session 14 of user core. Mar 14 00:26:26.278051 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:26:26.531142 sshd[6037]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:26.537128 systemd[1]: sshd@15-10.128.0.67:22-4.153.228.146:48364.service: Deactivated successfully. Mar 14 00:26:26.540188 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:26:26.541654 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:26:26.543897 systemd-logind[1439]: Removed session 14. Mar 14 00:26:26.578623 systemd[1]: Started sshd@16-10.128.0.67:22-4.153.228.146:48376.service - OpenSSH per-connection server daemon (4.153.228.146:48376). Mar 14 00:26:26.822217 sshd[6050]: Accepted publickey for core from 4.153.228.146 port 48376 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok Mar 14 00:26:26.824998 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:26.832171 systemd-logind[1439]: New session 15 of user core. Mar 14 00:26:26.837083 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:26:27.139236 sshd[6050]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:27.151189 systemd[1]: sshd@16-10.128.0.67:22-4.153.228.146:48376.service: Deactivated successfully. Mar 14 00:26:27.155424 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:26:27.156688 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:26:27.158832 systemd-logind[1439]: Removed session 15. 
Mar 14 00:26:27.192742 systemd[1]: Started sshd@17-10.128.0.67:22-4.153.228.146:48388.service - OpenSSH per-connection server daemon (4.153.228.146:48388).
Mar 14 00:26:27.464171 sshd[6060]: Accepted publickey for core from 4.153.228.146 port 48388 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:27.465572 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:27.476665 systemd-logind[1439]: New session 16 of user core.
Mar 14 00:26:27.482100 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:26:27.771970 sshd[6060]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:27.777549 systemd[1]: sshd@17-10.128.0.67:22-4.153.228.146:48388.service: Deactivated successfully.
Mar 14 00:26:27.781167 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:26:27.782539 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:26:27.784493 systemd-logind[1439]: Removed session 16.
Mar 14 00:26:32.829181 systemd[1]: Started sshd@18-10.128.0.67:22-4.153.228.146:53960.service - OpenSSH per-connection server daemon (4.153.228.146:53960).
Mar 14 00:26:33.094940 sshd[6081]: Accepted publickey for core from 4.153.228.146 port 53960 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:33.097085 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:33.102802 systemd-logind[1439]: New session 17 of user core.
Mar 14 00:26:33.108982 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:26:33.379866 sshd[6081]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:33.386350 systemd[1]: sshd@18-10.128.0.67:22-4.153.228.146:53960.service: Deactivated successfully.
Mar 14 00:26:33.389253 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:26:33.390809 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:26:33.392843 systemd-logind[1439]: Removed session 17.
Mar 14 00:26:38.428209 systemd[1]: Started sshd@19-10.128.0.67:22-4.153.228.146:53962.service - OpenSSH per-connection server daemon (4.153.228.146:53962).
Mar 14 00:26:38.673126 sshd[6116]: Accepted publickey for core from 4.153.228.146 port 53962 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:38.675215 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:38.682875 systemd-logind[1439]: New session 18 of user core.
Mar 14 00:26:38.687996 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:26:38.939146 sshd[6116]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:38.944160 systemd[1]: sshd@19-10.128.0.67:22-4.153.228.146:53962.service: Deactivated successfully.
Mar 14 00:26:38.948384 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:26:38.950752 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:26:38.952830 systemd-logind[1439]: Removed session 18.
Mar 14 00:26:38.989175 systemd[1]: Started sshd@20-10.128.0.67:22-4.153.228.146:47116.service - OpenSSH per-connection server daemon (4.153.228.146:47116).
Mar 14 00:26:39.232321 sshd[6129]: Accepted publickey for core from 4.153.228.146 port 47116 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:39.234268 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:39.241335 systemd-logind[1439]: New session 19 of user core.
Mar 14 00:26:39.247965 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:26:39.600463 sshd[6129]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:39.606274 systemd[1]: sshd@20-10.128.0.67:22-4.153.228.146:47116.service: Deactivated successfully.
Mar 14 00:26:39.610611 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:26:39.612267 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:26:39.614227 systemd-logind[1439]: Removed session 19.
Mar 14 00:26:39.651203 systemd[1]: Started sshd@21-10.128.0.67:22-4.153.228.146:47132.service - OpenSSH per-connection server daemon (4.153.228.146:47132).
Mar 14 00:26:39.891735 sshd[6140]: Accepted publickey for core from 4.153.228.146 port 47132 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:39.894225 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:39.902050 systemd-logind[1439]: New session 20 of user core.
Mar 14 00:26:39.908039 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:26:40.826124 sshd[6140]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:40.833543 systemd[1]: sshd@21-10.128.0.67:22-4.153.228.146:47132.service: Deactivated successfully.
Mar 14 00:26:40.840221 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:26:40.845340 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:26:40.847509 systemd-logind[1439]: Removed session 20.
Mar 14 00:26:40.879864 systemd[1]: Started sshd@22-10.128.0.67:22-4.153.228.146:47148.service - OpenSSH per-connection server daemon (4.153.228.146:47148).
Mar 14 00:26:41.129957 sshd[6163]: Accepted publickey for core from 4.153.228.146 port 47148 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:41.134453 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:41.144372 systemd-logind[1439]: New session 21 of user core.
Mar 14 00:26:41.149976 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:26:41.589953 sshd[6163]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:41.596132 systemd[1]: sshd@22-10.128.0.67:22-4.153.228.146:47148.service: Deactivated successfully.
Mar 14 00:26:41.599116 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:26:41.600329 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:26:41.602979 systemd-logind[1439]: Removed session 21.
Mar 14 00:26:41.647174 systemd[1]: Started sshd@23-10.128.0.67:22-4.153.228.146:47156.service - OpenSSH per-connection server daemon (4.153.228.146:47156).
Mar 14 00:26:41.929102 sshd[6175]: Accepted publickey for core from 4.153.228.146 port 47156 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:41.931791 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:41.938976 systemd-logind[1439]: New session 22 of user core.
Mar 14 00:26:41.944046 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:26:42.214671 sshd[6175]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:42.221879 systemd[1]: sshd@23-10.128.0.67:22-4.153.228.146:47156.service: Deactivated successfully.
Mar 14 00:26:42.225458 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:26:42.227223 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:26:42.229025 systemd-logind[1439]: Removed session 22.
Mar 14 00:26:47.264126 systemd[1]: Started sshd@24-10.128.0.67:22-4.153.228.146:47166.service - OpenSSH per-connection server daemon (4.153.228.146:47166).
Mar 14 00:26:47.508050 sshd[6190]: Accepted publickey for core from 4.153.228.146 port 47166 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:47.510337 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:47.517327 systemd-logind[1439]: New session 23 of user core.
Mar 14 00:26:47.521963 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:26:47.783981 sshd[6190]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:47.789371 systemd[1]: sshd@24-10.128.0.67:22-4.153.228.146:47166.service: Deactivated successfully.
Mar 14 00:26:47.793311 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:26:47.795676 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:26:47.797876 systemd-logind[1439]: Removed session 23.
Mar 14 00:26:52.835165 systemd[1]: Started sshd@25-10.128.0.67:22-4.153.228.146:52538.service - OpenSSH per-connection server daemon (4.153.228.146:52538).
Mar 14 00:26:53.074039 sshd[6245]: Accepted publickey for core from 4.153.228.146 port 52538 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:53.075991 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:53.082376 systemd-logind[1439]: New session 24 of user core.
Mar 14 00:26:53.088968 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:26:53.341229 sshd[6245]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:53.346901 systemd[1]: sshd@25-10.128.0.67:22-4.153.228.146:52538.service: Deactivated successfully.
Mar 14 00:26:53.350154 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:26:53.352887 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:26:53.354994 systemd-logind[1439]: Removed session 24.
Mar 14 00:26:58.401869 systemd[1]: Started sshd@26-10.128.0.67:22-4.153.228.146:52554.service - OpenSSH per-connection server daemon (4.153.228.146:52554).
Mar 14 00:26:58.697445 sshd[6290]: Accepted publickey for core from 4.153.228.146 port 52554 ssh2: RSA SHA256:slf66icJXFQ0A6yWLZ1cjDT1QQWqQd3UU4/9rvJvVok
Mar 14 00:26:58.698616 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:58.708450 systemd-logind[1439]: New session 25 of user core.
Mar 14 00:26:58.715758 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:26:59.018116 sshd[6290]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:59.025581 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:26:59.029799 systemd[1]: sshd@26-10.128.0.67:22-4.153.228.146:52554.service: Deactivated successfully.
Mar 14 00:26:59.034178 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:26:59.036721 systemd-logind[1439]: Removed session 25.