Nov 1 00:28:51.079687 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:28:51.079731 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:28:51.079749 kernel: BIOS-provided physical RAM map:
Nov 1 00:28:51.079763 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 1 00:28:51.079776 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 1 00:28:51.079790 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 1 00:28:51.079806 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 1 00:28:51.079825 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 1 00:28:51.079840 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Nov 1 00:28:51.079854 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Nov 1 00:28:51.079869 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Nov 1 00:28:51.079883 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Nov 1 00:28:51.079896 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 1 00:28:51.079911 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 1 00:28:51.079932 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 1 00:28:51.079948 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 1 00:28:51.079963 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 1 00:28:51.079979 kernel: NX (Execute Disable) protection: active
Nov 1 00:28:51.079994 kernel: APIC: Static calls initialized
Nov 1 00:28:51.080010 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:28:51.080060 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018
Nov 1 00:28:51.080077 kernel: SMBIOS 2.4 present.
Nov 1 00:28:51.080092 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Nov 1 00:28:51.080108 kernel: Hypervisor detected: KVM
Nov 1 00:28:51.080127 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:28:51.080142 kernel: kvm-clock: using sched offset of 12731338161 cycles
Nov 1 00:28:51.080159 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:28:51.080176 kernel: tsc: Detected 2299.998 MHz processor
Nov 1 00:28:51.080192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:28:51.080215 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:28:51.080231 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 1 00:28:51.080264 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Nov 1 00:28:51.080279 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:28:51.080299 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 1 00:28:51.080314 kernel: Using GB pages for direct mapping
Nov 1 00:28:51.080329 kernel: Secure boot disabled
Nov 1 00:28:51.080345 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:28:51.080361 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 1 00:28:51.080376 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 1 00:28:51.080393 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 1 00:28:51.080417 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 1 00:28:51.080438 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 1 00:28:51.080455 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Nov 1 00:28:51.080471 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 1 00:28:51.080487 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 1 00:28:51.080504 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 1 00:28:51.080521 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 1 00:28:51.080541 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 1 00:28:51.080559 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 1 00:28:51.080576 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 1 00:28:51.080592 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 1 00:28:51.080609 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 1 00:28:51.080627 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 1 00:28:51.080644 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 1 00:28:51.080660 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 1 00:28:51.080677 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 1 00:28:51.080699 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 1 00:28:51.080716 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:28:51.080733 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:28:51.080750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:28:51.080767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 1 00:28:51.080784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 1 00:28:51.080801 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Nov 1 00:28:51.080819 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Nov 1 00:28:51.080836 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Nov 1 00:28:51.080857 kernel: Zone ranges:
Nov 1 00:28:51.080874 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:28:51.080891 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:28:51.080908 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:28:51.080926 kernel: Movable zone start for each node
Nov 1 00:28:51.080942 kernel: Early memory node ranges
Nov 1 00:28:51.080959 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 1 00:28:51.080975 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 1 00:28:51.080992 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Nov 1 00:28:51.081013 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 1 00:28:51.081029 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:28:51.081067 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 1 00:28:51.081085 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:28:51.081101 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 1 00:28:51.081118 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 1 00:28:51.081135 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 00:28:51.081152 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 1 00:28:51.081170 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 1 00:28:51.081191 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:28:51.081229 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:28:51.081244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:28:51.081259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:28:51.081275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:28:51.081291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:28:51.081307 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:28:51.081324 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:28:51.081341 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 1 00:28:51.081363 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:28:51.081380 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:28:51.081398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:28:51.081414 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:28:51.081432 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:28:51.081448 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:28:51.081463 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:28:51.081480 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:28:51.081500 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:28:51.081522 kernel: random: crng init done
Nov 1 00:28:51.081539 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:28:51.081556 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:28:51.081573 kernel: Fallback order for Node 0: 0
Nov 1 00:28:51.081590 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Nov 1 00:28:51.081608 kernel: Policy zone: Normal
Nov 1 00:28:51.081626 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:28:51.081642 kernel: software IO TLB: area num 2.
Nov 1 00:28:51.081663 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 346940K reserved, 0K cma-reserved)
Nov 1 00:28:51.081680 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:28:51.081697 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:28:51.081714 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:28:51.081731 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:28:51.081748 kernel: Dynamic Preempt: voluntary
Nov 1 00:28:51.081764 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:28:51.081783 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:28:51.081801 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:28:51.081836 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:28:51.081854 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:28:51.081872 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:28:51.081895 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:28:51.081913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:28:51.081931 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:28:51.081950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:28:51.081968 kernel: Console: colour dummy device 80x25
Nov 1 00:28:51.081990 kernel: printk: console [ttyS0] enabled
Nov 1 00:28:51.082008 kernel: ACPI: Core revision 20230628
Nov 1 00:28:51.082026 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:28:51.082063 kernel: x2apic enabled
Nov 1 00:28:51.082081 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:28:51.082100 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 1 00:28:51.082118 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:28:51.082137 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 1 00:28:51.082156 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 1 00:28:51.082178 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 1 00:28:51.082196 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:28:51.082222 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 00:28:51.082240 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 00:28:51.082257 kernel: Spectre V2 : Mitigation: IBRS
Nov 1 00:28:51.082276 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:28:51.082295 kernel: RETBleed: Mitigation: IBRS
Nov 1 00:28:51.082312 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:28:51.082329 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Nov 1 00:28:51.082352 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:28:51.082369 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:28:51.082388 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:28:51.082404 kernel: active return thunk: its_return_thunk
Nov 1 00:28:51.082421 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:28:51.082438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:28:51.082455 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:28:51.082473 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:28:51.082492 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:28:51.082514 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:28:51.082533 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:28:51.082551 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:28:51.082570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:28:51.082591 kernel: landlock: Up and running.
Nov 1 00:28:51.082610 kernel: SELinux: Initializing.
Nov 1 00:28:51.082631 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.082650 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.082671 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 1 00:28:51.082694 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:28:51.082714 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:28:51.082734 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:28:51.082755 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 1 00:28:51.082775 kernel: signal: max sigframe size: 1776
Nov 1 00:28:51.082795 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:28:51.082815 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:28:51.082834 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:28:51.082854 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:28:51.082881 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:28:51.082901 kernel: .... node #0, CPUs: #1
Nov 1 00:28:51.082960 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 1 00:28:51.083005 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:28:51.083025 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:28:51.083552 kernel: smpboot: Max logical packages: 1
Nov 1 00:28:51.083574 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 1 00:28:51.083594 kernel: devtmpfs: initialized
Nov 1 00:28:51.083620 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:28:51.083640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 1 00:28:51.083660 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:28:51.083680 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:28:51.083700 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:28:51.083720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:28:51.083739 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:28:51.083759 kernel: audit: type=2000 audit(1761956929.521:1): state=initialized audit_enabled=0 res=1
Nov 1 00:28:51.083778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:28:51.083801 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:28:51.083821 kernel: cpuidle: using governor menu
Nov 1 00:28:51.083840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:28:51.083860 kernel: dca service started, version 1.12.1
Nov 1 00:28:51.083879 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:28:51.083899 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:28:51.083918 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:28:51.083936 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:28:51.083955 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:28:51.083978 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:28:51.084000 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:28:51.084048 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:28:51.084066 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:28:51.084084 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 1 00:28:51.084099 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:28:51.084115 kernel: ACPI: Interpreter enabled
Nov 1 00:28:51.084132 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:28:51.084149 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:28:51.084171 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:28:51.084188 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 1 00:28:51.084215 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 1 00:28:51.084232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:28:51.084521 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:28:51.084726 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 1 00:28:51.084909 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 1 00:28:51.084937 kernel: PCI host bridge to bus 0000:00
Nov 1 00:28:51.085144 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:28:51.085328 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:28:51.085503 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:28:51.085698 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 1 00:28:51.085861 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:28:51.086099 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:28:51.086316 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 1 00:28:51.086513 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:28:51.086699 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 1 00:28:51.086895 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 1 00:28:51.087109 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 1 00:28:51.087304 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 1 00:28:51.087504 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:28:51.087693 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 1 00:28:51.087877 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 1 00:28:51.088105 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:28:51.088339 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 1 00:28:51.088553 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 1 00:28:51.088582 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:28:51.088612 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:28:51.088631 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:28:51.088649 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:28:51.088670 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:28:51.088690 kernel: iommu: Default domain type: Translated
Nov 1 00:28:51.088710 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:28:51.088729 kernel: efivars: Registered efivars operations
Nov 1 00:28:51.088746 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:28:51.088765 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:28:51.088793 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 1 00:28:51.088811 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 1 00:28:51.088828 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 1 00:28:51.088845 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 1 00:28:51.088862 kernel: vgaarb: loaded
Nov 1 00:28:51.088879 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:28:51.088900 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:28:51.088920 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:28:51.088939 kernel: pnp: PnP ACPI init
Nov 1 00:28:51.088961 kernel: pnp: PnP ACPI: found 7 devices
Nov 1 00:28:51.088981 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:28:51.089000 kernel: NET: Registered PF_INET protocol family
Nov 1 00:28:51.089021 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:28:51.089125 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:28:51.089145 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:28:51.089162 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:28:51.089201 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 1 00:28:51.089254 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:28:51.089291 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.089309 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.089328 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:28:51.089348 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:28:51.090133 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:28:51.090532 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:28:51.090740 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:28:51.090935 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 1 00:28:51.093641 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:28:51.093689 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:28:51.093711 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:28:51.093738 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 1 00:28:51.093760 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:28:51.093783 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:28:51.093805 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:28:51.093826 kernel: Initialise system trusted keyrings
Nov 1 00:28:51.093859 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:28:51.093880 kernel: Key type asymmetric registered
Nov 1 00:28:51.093912 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:28:51.093940 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:28:51.093962 kernel: io scheduler mq-deadline registered
Nov 1 00:28:51.093986 kernel: io scheduler kyber registered
Nov 1 00:28:51.094008 kernel: io scheduler bfq registered
Nov 1 00:28:51.094029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:28:51.095619 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:28:51.095884 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 1 00:28:51.095912 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 1 00:28:51.096141 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 1 00:28:51.096167 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:28:51.096359 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 1 00:28:51.096383 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:28:51.096402 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096421 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096439 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096462 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096703 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 1 00:28:51.096730 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:28:51.096748 kernel: i8042: Warning: Keylock active
Nov 1 00:28:51.096766 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:28:51.096785 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:28:51.096999 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 1 00:28:51.099185 kernel: rtc_cmos 00:00: registered as rtc0
Nov 1 00:28:51.099443 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:28:50 UTC (1761956930)
Nov 1 00:28:51.099695 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 1 00:28:51.099726 kernel: intel_pstate: CPU model not supported
Nov 1 00:28:51.099750 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:28:51.099776 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:28:51.099798 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:28:51.099819 kernel: Segment Routing with IPv6
Nov 1 00:28:51.099843 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:28:51.099880 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:28:51.099901 kernel: Key type dns_resolver registered
Nov 1 00:28:51.099931 kernel: IPI shorthand broadcast: enabled
Nov 1 00:28:51.099956 kernel: sched_clock: Marking stable (828004281, 135471636)->(980024426, -16548509)
Nov 1 00:28:51.099978 kernel: registered taskstats version 1
Nov 1 00:28:51.099997 kernel: Loading compiled-in X.509 certificates
Nov 1 00:28:51.100021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:28:51.100070 kernel: Key type .fscrypt registered
Nov 1 00:28:51.100089 kernel: Key type fscrypt-provisioning registered
Nov 1 00:28:51.100112 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:28:51.100133 kernel: ima: No architecture policies found
Nov 1 00:28:51.100152 kernel: clk: Disabling unused clocks
Nov 1 00:28:51.100172 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:28:51.100192 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:28:51.100212 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:28:51.100231 kernel: Run /init as init process
Nov 1 00:28:51.100251 kernel: with arguments:
Nov 1 00:28:51.100270 kernel: /init
Nov 1 00:28:51.100293 kernel: with environment:
Nov 1 00:28:51.100310 kernel: HOME=/
Nov 1 00:28:51.100328 kernel: TERM=linux
Nov 1 00:28:51.100352 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:28:51.100387 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:28:51.100415 systemd[1]: Detected virtualization google.
Nov 1 00:28:51.100435 systemd[1]: Detected architecture x86-64.
Nov 1 00:28:51.100459 systemd[1]: Running in initrd.
Nov 1 00:28:51.100494 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:28:51.100519 systemd[1]: Hostname set to .
Nov 1 00:28:51.100542 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:28:51.100563 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:28:51.100585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:28:51.100616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:28:51.100636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:28:51.100661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:28:51.100681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:28:51.100703 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:28:51.100729 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:28:51.100764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:28:51.100788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:28:51.100811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:28:51.100834 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:28:51.100855 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:28:51.100906 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:28:51.100930 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:28:51.100949 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:28:51.100971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:28:51.101002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:28:51.101023 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:28:51.103125 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:28:51.103157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:28:51.103192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:28:51.103216 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:28:51.103238 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:28:51.103261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:28:51.103284 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:28:51.103315 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:28:51.103338 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:28:51.103360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:28:51.103382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:28:51.103404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:28:51.103473 systemd-journald[183]: Collecting audit messages is disabled.
Nov 1 00:28:51.103526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:28:51.103549 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:28:51.103573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:28:51.103602 systemd-journald[183]: Journal started
Nov 1 00:28:51.103642 systemd-journald[183]: Runtime Journal (/run/log/journal/34362c14c0f34393aaf0ad64de9a0578) is 8.0M, max 148.7M, 140.7M free.
Nov 1 00:28:51.091255 systemd-modules-load[184]: Inserted module 'overlay'
Nov 1 00:28:51.116062 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:28:51.126246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:51.138267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:28:51.144087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:28:51.147764 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 1 00:28:51.152168 kernel: Bridge firewalling registered
Nov 1 00:28:51.148790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:28:51.159258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:28:51.173282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:28:51.175237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:28:51.183259 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:28:51.195701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:28:51.208048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:28:51.214942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:51.219564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:28:51.227250 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:28:51.242245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:28:51.258200 dracut-cmdline[216]: dracut-dracut-053
Nov 1 00:28:51.263376 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:28:51.306394 systemd-resolved[217]: Positive Trust Anchors:
Nov 1 00:28:51.306925 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:28:51.306993 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:28:51.312265 systemd-resolved[217]: Defaulting to hostname 'linux'.
Nov 1 00:28:51.313923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:28:51.332692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:28:51.374080 kernel: SCSI subsystem initialized
Nov 1 00:28:51.385074 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:28:51.398077 kernel: iscsi: registered transport (tcp)
Nov 1 00:28:51.422440 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:28:51.422502 kernel: QLogic iSCSI HBA Driver
Nov 1 00:28:51.475071 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:28:51.483250 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:28:51.519428 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:28:51.519501 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:28:51.519530 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:28:51.565071 kernel: raid6: avx2x4 gen() 18200 MB/s
Nov 1 00:28:51.582068 kernel: raid6: avx2x2 gen() 18292 MB/s
Nov 1 00:28:51.599487 kernel: raid6: avx2x1 gen() 14326 MB/s
Nov 1 00:28:51.599527 kernel: raid6: using algorithm avx2x2 gen() 18292 MB/s
Nov 1 00:28:51.617463 kernel: raid6: .... xor() 17747 MB/s, rmw enabled
Nov 1 00:28:51.617520 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:28:51.641074 kernel: xor: automatically using best checksumming function avx
Nov 1 00:28:51.819073 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:28:51.832882 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:28:51.841138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:28:51.864386 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Nov 1 00:28:51.871282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:28:51.881239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:28:51.911741 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Nov 1 00:28:51.951306 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:28:51.957287 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:28:52.059126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:28:52.071208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:28:52.109324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:28:52.120866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:28:52.129201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:28:52.133614 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:28:52.152403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:28:52.197065 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:28:52.202240 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:28:52.207202 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 1 00:28:52.237056 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:28:52.250398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:28:52.251380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:52.265346 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:28:52.268653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:28:52.268884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:52.281586 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:28:52.281738 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:28:52.273480 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:28:52.290482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:28:52.305726 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Nov 1 00:28:52.306098 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 1 00:28:52.306370 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 1 00:28:52.306607 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 1 00:28:52.310019 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 00:28:52.323179 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:28:52.323228 kernel: GPT:17805311 != 33554431
Nov 1 00:28:52.323253 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:28:52.323277 kernel: GPT:17805311 != 33554431
Nov 1 00:28:52.323307 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:28:52.323359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:52.326276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:52.331246 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 1 00:28:52.340264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:28:52.377476 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:52.400732 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (446)
Nov 1 00:28:52.405535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 1 00:28:52.410326 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (442)
Nov 1 00:28:52.433893 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 1 00:28:52.444081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 1 00:28:52.444310 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 1 00:28:52.460902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 1 00:28:52.473263 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:28:52.486204 disk-uuid[548]: Primary Header is updated.
Nov 1 00:28:52.486204 disk-uuid[548]: Secondary Entries is updated.
Nov 1 00:28:52.486204 disk-uuid[548]: Secondary Header is updated.
Nov 1 00:28:52.497158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:52.517074 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:52.525126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:53.525085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:53.527288 disk-uuid[549]: The operation has completed successfully.
Nov 1 00:28:53.596823 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:28:53.596999 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:28:53.638251 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:28:53.668702 sh[566]: Success
Nov 1 00:28:53.693090 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:28:53.777797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:28:53.784623 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:28:53.808634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:28:53.847098 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:28:53.847172 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:53.870307 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:28:53.870367 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:28:53.870404 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:28:53.908072 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 00:28:53.916543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:28:53.917566 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:28:53.923274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:28:53.942231 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:28:53.993271 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:53.993331 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:53.993359 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:28:54.018390 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:28:54.018461 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:28:54.042565 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:54.042104 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:28:54.063161 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:28:54.078322 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:28:54.173513 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:28:54.184243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:28:54.288576 ignition[671]: Ignition 2.19.0
Nov 1 00:28:54.291941 systemd-networkd[748]: lo: Link UP
Nov 1 00:28:54.289017 ignition[671]: Stage: fetch-offline
Nov 1 00:28:54.291949 systemd-networkd[748]: lo: Gained carrier
Nov 1 00:28:54.289126 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.292296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:28:54.289144 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.293835 systemd-networkd[748]: Enumeration completed
Nov 1 00:28:54.289325 ignition[671]: parsed url from cmdline: ""
Nov 1 00:28:54.294723 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:28:54.289332 ignition[671]: no config URL provided
Nov 1 00:28:54.294731 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:28:54.289343 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.296583 systemd-networkd[748]: eth0: Link UP
Nov 1 00:28:54.289358 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.296591 systemd-networkd[748]: eth0: Gained carrier
Nov 1 00:28:54.289369 ignition[671]: failed to fetch config: resource requires networking
Nov 1 00:28:54.296604 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:28:54.289669 ignition[671]: Ignition finished successfully
Nov 1 00:28:54.307116 systemd-networkd[748]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84'
Nov 1 00:28:54.394443 ignition[757]: Ignition 2.19.0
Nov 1 00:28:54.307133 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 1 00:28:54.394452 ignition[757]: Stage: fetch
Nov 1 00:28:54.313373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:28:54.394642 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.330433 systemd[1]: Reached target network.target - Network.
Nov 1 00:28:54.394654 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.349286 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 00:28:54.394771 ignition[757]: parsed url from cmdline: ""
Nov 1 00:28:54.404618 unknown[757]: fetched base config from "system"
Nov 1 00:28:54.394778 ignition[757]: no config URL provided
Nov 1 00:28:54.404631 unknown[757]: fetched base config from "system"
Nov 1 00:28:54.394788 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.404643 unknown[757]: fetched user config from "gcp"
Nov 1 00:28:54.394799 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.408652 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:28:54.394823 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 1 00:28:54.431259 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:28:54.397991 ignition[757]: GET result: OK
Nov 1 00:28:54.499304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:28:54.398132 ignition[757]: parsing config with SHA512: 3ce23d037ceb8dbb81dadb4fc7ef59700d1cb2e8142e1e24027d908cf796d40579072887050797542339c76692e41fc76ae9434e85aff53d8585addc20838653
Nov 1 00:28:54.507238 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:28:54.406640 ignition[757]: fetch: fetch complete
Nov 1 00:28:54.546680 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:28:54.406651 ignition[757]: fetch: fetch passed
Nov 1 00:28:54.566963 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:28:54.406730 ignition[757]: Ignition finished successfully
Nov 1 00:28:54.591292 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:28:54.496639 ignition[763]: Ignition 2.19.0
Nov 1 00:28:54.601354 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:28:54.496648 ignition[763]: Stage: kargs
Nov 1 00:28:54.629274 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:28:54.496871 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.637382 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:28:54.496884 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.662339 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:28:54.498104 ignition[763]: kargs: kargs passed
Nov 1 00:28:54.498164 ignition[763]: Ignition finished successfully
Nov 1 00:28:54.539589 ignition[768]: Ignition 2.19.0
Nov 1 00:28:54.539598 ignition[768]: Stage: disks
Nov 1 00:28:54.539818 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.539837 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.541171 ignition[768]: disks: disks passed
Nov 1 00:28:54.541228 ignition[768]: Ignition finished successfully
Nov 1 00:28:54.719721 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 00:28:54.905164 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:28:54.910168 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:28:55.067076 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:28:55.067738 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:28:55.068599 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:28:55.088162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:28:55.115470 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:28:55.140093 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (785)
Nov 1 00:28:55.141534 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:28:55.163925 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:55.163974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:55.164001 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:28:55.141628 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:28:55.218243 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:28:55.218289 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:28:55.141671 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:28:55.202169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:28:55.226442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:28:55.250283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:28:55.389896 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:28:55.400747 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:28:55.410134 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:28:55.420206 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:28:55.558454 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:28:55.575175 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:28:55.596072 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:55.609242 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:28:55.618205 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:28:55.663768 ignition[897]: INFO : Ignition 2.19.0
Nov 1 00:28:55.663768 ignition[897]: INFO : Stage: mount
Nov 1 00:28:55.668227 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:28:55.697331 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:55.697331 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:55.697331 ignition[897]: INFO : mount: mount passed
Nov 1 00:28:55.697331 ignition[897]: INFO : Ignition finished successfully
Nov 1 00:28:55.691327 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:28:55.713181 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:28:55.765371 systemd-networkd[748]: eth0: Gained IPv6LL
Nov 1 00:28:55.769279 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:28:55.806074 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (910)
Nov 1 00:28:55.806132 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:55.822142 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:55.822190 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:28:55.843840 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:28:55.843901 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:28:55.846948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:28:55.883306 ignition[927]: INFO : Ignition 2.19.0
Nov 1 00:28:55.883306 ignition[927]: INFO : Stage: files
Nov 1 00:28:55.898195 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:55.898195 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:55.898195 ignition[927]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:28:55.898195 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:28:55.898195 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:28:55.898195 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:28:55.894761 unknown[927]: wrote ssh authorized keys file for user: core
Nov 1 00:28:56.089714 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:28:56.352536 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:28:56.352536 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:28:56.856767 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:28:57.677749 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:57.677749 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: files passed
Nov 1 00:28:57.717309 ignition[927]: INFO : Ignition finished successfully
Nov 1 00:28:57.682261 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:28:57.713465 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:28:57.733556 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:28:57.772720 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:28:57.927260 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:28:57.927260 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:28:57.772853 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:28:57.994303 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:28:57.793755 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:28:57.815654 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:28:57.846327 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:28:57.921416 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:28:57.921540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:28:57.938332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:28:57.962234 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:28:57.983310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:28:57.990247 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:28:58.044324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:28:58.071255 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:28:58.105235 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:28:58.117419 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:28:58.127553 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:28:58.149494 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:28:58.149686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:28:58.206371 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:28:58.216511 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:28:58.233564 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:28:58.248514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:28:58.267531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:28:58.285578 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:28:58.302526 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:28:58.320568 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:28:58.340581 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:28:58.358578 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:28:58.375421 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:28:58.375644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:28:58.424388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:28:58.432485 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:28:58.450437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:28:58.450621 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:28:58.469501 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:28:58.469696 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:28:58.525358 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:28:58.525591 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:28:58.535594 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:28:58.535770 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:28:58.562458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:28:58.601279 ignition[979]: INFO : Ignition 2.19.0
Nov 1 00:28:58.601279 ignition[979]: INFO : Stage: umount
Nov 1 00:28:58.601279 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:58.601279 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:58.601279 ignition[979]: INFO : umount: umount passed
Nov 1 00:28:58.601279 ignition[979]: INFO : Ignition finished successfully
Nov 1 00:28:58.609178 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:28:58.609445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:28:58.623284 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:28:58.683192 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:28:58.683494 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:28:58.705476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:28:58.705710 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:28:58.738389 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:28:58.739709 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:28:58.739827 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:28:58.744828 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:28:58.744937 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:28:58.763822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:28:58.763959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:28:58.780366 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:28:58.780436 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:28:58.797494 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:28:58.797562 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:28:58.814451 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:28:58.814520 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 00:28:58.831438 systemd[1]: Stopped target network.target - Network.
Nov 1 00:28:58.848334 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:28:58.848413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:28:58.863393 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:28:58.889184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:28:58.894124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:28:58.900352 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:28:58.918438 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:28:58.933465 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:28:58.933535 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:28:58.948439 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:28:58.948513 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:28:58.965421 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:28:58.965498 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:28:58.982482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:28:58.982558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:28:58.999460 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:28:58.999534 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:28:59.016664 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:28:59.021113 systemd-networkd[748]: eth0: DHCPv6 lease lost
Nov 1 00:28:59.044390 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:28:59.064718 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:28:59.064851 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:28:59.074005 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:28:59.074278 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:28:59.091966 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:28:59.092024 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:28:59.118167 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:28:59.137119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:28:59.137214 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:28:59.149221 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:28:59.149301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:28:59.167258 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:28:59.167381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:28:59.185233 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:28:59.185343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:28:59.204359 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:28:59.217498 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:28:59.217720 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:28:59.242557 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:28:59.619174 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:28:59.242666 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:28:59.260307 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:28:59.260356 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:28:59.278295 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:28:59.278362 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:28:59.322258 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:28:59.322359 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:28:59.359265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:28:59.359375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:59.410213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:28:59.421305 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:28:59.421395 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:28:59.457453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:28:59.457542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:59.486917 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:28:59.487068 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:28:59.496737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:28:59.496849 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:28:59.514781 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:28:59.537239 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:28:59.572058 systemd[1]: Switching root.
Nov 1 00:28:59.822147 systemd-journald[183]: Journal stopped
Nov 1 00:28:51.079687 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:28:51.079731 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:28:51.079749 kernel: BIOS-provided physical RAM map:
Nov 1 00:28:51.079763 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 1 00:28:51.079776 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 1 00:28:51.079790 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 1 00:28:51.079806 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 1 00:28:51.079825 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 1 00:28:51.079840 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Nov 1 00:28:51.079854 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Nov 1 00:28:51.079869 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Nov 1 00:28:51.079883 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Nov 1 00:28:51.079896 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 1 00:28:51.079911 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 1 00:28:51.079932 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 1 00:28:51.079948 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 1 00:28:51.079963 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 1 00:28:51.079979 kernel: NX (Execute Disable) protection: active
Nov 1 00:28:51.079994 kernel: APIC: Static calls initialized
Nov 1 00:28:51.080010 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:28:51.080060 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018
Nov 1 00:28:51.080077 kernel: SMBIOS 2.4 present.
Nov 1 00:28:51.080092 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Nov 1 00:28:51.080108 kernel: Hypervisor detected: KVM
Nov 1 00:28:51.080127 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:28:51.080142 kernel: kvm-clock: using sched offset of 12731338161 cycles
Nov 1 00:28:51.080159 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:28:51.080176 kernel: tsc: Detected 2299.998 MHz processor
Nov 1 00:28:51.080192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:28:51.080215 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:28:51.080231 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 1 00:28:51.080264 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Nov 1 00:28:51.080279 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:28:51.080299 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 1 00:28:51.080314 kernel: Using GB pages for direct mapping
Nov 1 00:28:51.080329 kernel: Secure boot disabled
Nov 1 00:28:51.080345 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:28:51.080361 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 1 00:28:51.080376 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 1 00:28:51.080393 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 1 00:28:51.080417 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 1 00:28:51.080438 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 1 00:28:51.080455 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Nov 1 00:28:51.080471 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 1 00:28:51.080487 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 1 00:28:51.080504 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 1 00:28:51.080521 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 1 00:28:51.080541 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 1 00:28:51.080559 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 1 00:28:51.080576 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 1 00:28:51.080592 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 1 00:28:51.080609 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 1 00:28:51.080627 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 1 00:28:51.080644 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 1 00:28:51.080660 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 1 00:28:51.080677 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 1 00:28:51.080699 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 1 00:28:51.080716 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:28:51.080733 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:28:51.080750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:28:51.080767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 1 00:28:51.080784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 1 00:28:51.080801 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Nov 1 00:28:51.080819 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Nov 1 00:28:51.080836 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Nov 1 00:28:51.080857 kernel: Zone ranges:
Nov 1 00:28:51.080874 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:28:51.080891 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:28:51.080908 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:28:51.080926 kernel: Movable zone start for each node
Nov 1 00:28:51.080942 kernel: Early memory node ranges
Nov 1 00:28:51.080959 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 1 00:28:51.080975 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 1 00:28:51.080992 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Nov 1 00:28:51.081013 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 1 00:28:51.081029 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:28:51.081067 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 1 00:28:51.081085 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:28:51.081101 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 1 00:28:51.081118 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 1 00:28:51.081135 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 00:28:51.081152 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 1 00:28:51.081170 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 1 00:28:51.081191 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:28:51.081229 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:28:51.081244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:28:51.081259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:28:51.081275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:28:51.081291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:28:51.081307 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:28:51.081324 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:28:51.081341 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 1 00:28:51.081363 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:28:51.081380 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:28:51.081398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:28:51.081414 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:28:51.081432 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:28:51.081448 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:28:51.081463 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:28:51.081480 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:28:51.081500 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:28:51.081522 kernel: random: crng init done
Nov 1 00:28:51.081539 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:28:51.081556 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:28:51.081573 kernel: Fallback order for Node 0: 0
Nov 1 00:28:51.081590 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Nov 1 00:28:51.081608 kernel: Policy zone: Normal
Nov 1 00:28:51.081626 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:28:51.081642 kernel: software IO TLB: area num 2.
Nov 1 00:28:51.081663 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 346940K reserved, 0K cma-reserved)
Nov 1 00:28:51.081680 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:28:51.081697 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:28:51.081714 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:28:51.081731 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:28:51.081748 kernel: Dynamic Preempt: voluntary
Nov 1 00:28:51.081764 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:28:51.081783 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:28:51.081801 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:28:51.081836 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:28:51.081854 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:28:51.081872 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:28:51.081895 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:28:51.081913 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:28:51.081931 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:28:51.081950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:28:51.081968 kernel: Console: colour dummy device 80x25
Nov 1 00:28:51.081990 kernel: printk: console [ttyS0] enabled
Nov 1 00:28:51.082008 kernel: ACPI: Core revision 20230628
Nov 1 00:28:51.082026 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:28:51.082063 kernel: x2apic enabled
Nov 1 00:28:51.082081 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:28:51.082100 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 1 00:28:51.082118 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:28:51.082137 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 1 00:28:51.082156 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 1 00:28:51.082178 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 1 00:28:51.082196 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:28:51.082222 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 00:28:51.082240 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 00:28:51.082257 kernel: Spectre V2 : Mitigation: IBRS
Nov 1 00:28:51.082276 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:28:51.082295 kernel: RETBleed: Mitigation: IBRS
Nov 1 00:28:51.082312 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:28:51.082329 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Nov 1 00:28:51.082352 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:28:51.082369 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:28:51.082388 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:28:51.082404 kernel: active return thunk: its_return_thunk
Nov 1 00:28:51.082421 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:28:51.082438 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:28:51.082455 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:28:51.082473 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:28:51.082492 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:28:51.082514 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:28:51.082533 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:28:51.082551 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:28:51.082570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:28:51.082591 kernel: landlock: Up and running.
Nov 1 00:28:51.082610 kernel: SELinux: Initializing.
Nov 1 00:28:51.082631 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.082650 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.082671 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 1 00:28:51.082694 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:28:51.082714 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:28:51.082734 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:28:51.082755 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 1 00:28:51.082775 kernel: signal: max sigframe size: 1776
Nov 1 00:28:51.082795 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:28:51.082815 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:28:51.082834 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:28:51.082854 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:28:51.082881 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:28:51.082901 kernel: .... node #0, CPUs: #1
Nov 1 00:28:51.082960 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 1 00:28:51.083005 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:28:51.083025 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:28:51.083552 kernel: smpboot: Max logical packages: 1
Nov 1 00:28:51.083574 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 1 00:28:51.083594 kernel: devtmpfs: initialized
Nov 1 00:28:51.083620 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:28:51.083640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 1 00:28:51.083660 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:28:51.083680 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:28:51.083700 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:28:51.083720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:28:51.083739 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:28:51.083759 kernel: audit: type=2000 audit(1761956929.521:1): state=initialized audit_enabled=0 res=1
Nov 1 00:28:51.083778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:28:51.083801 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:28:51.083821 kernel: cpuidle: using governor menu
Nov 1 00:28:51.083840 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:28:51.083860 kernel: dca service started, version 1.12.1
Nov 1 00:28:51.083879 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:28:51.083899 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:28:51.083918 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:28:51.083936 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:28:51.083955 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:28:51.083978 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:28:51.084000 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:28:51.084048 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:28:51.084066 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:28:51.084084 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 1 00:28:51.084099 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:28:51.084115 kernel: ACPI: Interpreter enabled
Nov 1 00:28:51.084132 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:28:51.084149 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:28:51.084171 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:28:51.084188 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 1 00:28:51.084215 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 1 00:28:51.084232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:28:51.084521 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:28:51.084726 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 1 00:28:51.084909 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 1 00:28:51.084937 kernel: PCI host bridge to bus 0000:00
Nov 1 00:28:51.085144 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:28:51.085328 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:28:51.085503 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:28:51.085698 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 1 00:28:51.085861 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:28:51.086099 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:28:51.086316 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 1 00:28:51.086513 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:28:51.086699 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 1 00:28:51.086895 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 1 00:28:51.087109 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 1 00:28:51.087304 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 1 00:28:51.087504 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:28:51.087693 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 1 00:28:51.087877 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 1 00:28:51.088105 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:28:51.088339 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 1 00:28:51.088553 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 1 00:28:51.088582 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:28:51.088612 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:28:51.088631 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:28:51.088649 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:28:51.088670 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:28:51.088690 kernel: iommu: Default domain type: Translated
Nov 1 00:28:51.088710 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:28:51.088729 kernel: efivars: Registered efivars operations
Nov 1 00:28:51.088746 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:28:51.088765 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:28:51.088793 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 1 00:28:51.088811 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 1 00:28:51.088828 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 1 00:28:51.088845 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 1 00:28:51.088862 kernel: vgaarb: loaded
Nov 1 00:28:51.088879 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:28:51.088900 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:28:51.088920 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:28:51.088939 kernel: pnp: PnP ACPI init
Nov 1 00:28:51.088961 kernel: pnp: PnP ACPI: found 7 devices
Nov 1 00:28:51.088981 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:28:51.089000 kernel: NET: Registered PF_INET protocol family
Nov 1 00:28:51.089021 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:28:51.089125 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:28:51.089145 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:28:51.089162 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:28:51.089201 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 1 00:28:51.089254 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:28:51.089291 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.089309 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:28:51.089328 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:28:51.089348 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:28:51.090133 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:28:51.090532 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:28:51.090740 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:28:51.090935 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 1 00:28:51.093641 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:28:51.093689 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:28:51.093711 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:28:51.093738 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 1 00:28:51.093760 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:28:51.093783 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:28:51.093805 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:28:51.093826 kernel: Initialise system trusted keyrings
Nov 1 00:28:51.093859 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:28:51.093880 kernel: Key type asymmetric registered
Nov 1 00:28:51.093912 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:28:51.093940 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:28:51.093962 kernel: io scheduler mq-deadline registered
Nov 1 00:28:51.093986 kernel: io scheduler kyber registered
Nov 1 00:28:51.094008 kernel: io scheduler bfq registered
Nov 1 00:28:51.094029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:28:51.095619 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:28:51.095884 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 1 00:28:51.095912 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 1 00:28:51.096141 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 1 00:28:51.096167 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:28:51.096359 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 1 00:28:51.096383 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:28:51.096402 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096421 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096439 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096462 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 1 00:28:51.096703 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 1 00:28:51.096730 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:28:51.096748 kernel: i8042: Warning: Keylock active
Nov 1 00:28:51.096766 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:28:51.096785 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:28:51.096999 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 1 00:28:51.099185 kernel: rtc_cmos 00:00: registered as rtc0
Nov 1 00:28:51.099443 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:28:50 UTC (1761956930)
Nov 1 00:28:51.099695 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 1 00:28:51.099726 kernel: intel_pstate: CPU model not supported
Nov 1 00:28:51.099750 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:28:51.099776 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:28:51.099798 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:28:51.099819 kernel: Segment Routing with IPv6
Nov 1 00:28:51.099843 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:28:51.099880 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:28:51.099901 kernel: Key type dns_resolver registered Nov 1 00:28:51.099931 kernel: IPI shorthand broadcast: enabled Nov 1 00:28:51.099956 kernel: sched_clock: Marking stable (828004281, 135471636)->(980024426, -16548509) Nov 1 00:28:51.099978 kernel: registered taskstats version 1 Nov 1 00:28:51.099997 kernel: Loading compiled-in X.509 certificates Nov 1 00:28:51.100021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:28:51.100070 kernel: Key type .fscrypt registered Nov 1 00:28:51.100089 kernel: Key type fscrypt-provisioning registered Nov 1 00:28:51.100112 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:28:51.100133 kernel: ima: No architecture policies found Nov 1 00:28:51.100152 kernel: clk: Disabling unused clocks Nov 1 00:28:51.100172 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:28:51.100192 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:28:51.100212 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:28:51.100231 kernel: Run /init as init process Nov 1 00:28:51.100251 kernel: with arguments: Nov 1 00:28:51.100270 kernel: /init Nov 1 00:28:51.100293 kernel: with environment: Nov 1 00:28:51.100310 kernel: HOME=/ Nov 1 00:28:51.100328 kernel: TERM=linux Nov 1 00:28:51.100352 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:28:51.100387 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:28:51.100415 systemd[1]: 
Detected virtualization google. Nov 1 00:28:51.100435 systemd[1]: Detected architecture x86-64. Nov 1 00:28:51.100459 systemd[1]: Running in initrd. Nov 1 00:28:51.100494 systemd[1]: No hostname configured, using default hostname. Nov 1 00:28:51.100519 systemd[1]: Hostname set to . Nov 1 00:28:51.100542 systemd[1]: Initializing machine ID from random generator. Nov 1 00:28:51.100563 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:28:51.100585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:28:51.100616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:28:51.100636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:28:51.100661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:28:51.100681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:28:51.100703 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:28:51.100729 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:28:51.100764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:28:51.100788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:28:51.100811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:28:51.100834 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:28:51.100855 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:28:51.100906 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:28:51.100930 systemd[1]: Reached target timers.target - Timer Units. 
Nov 1 00:28:51.100949 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:28:51.100971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:28:51.101002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:28:51.101023 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:28:51.103125 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:28:51.103157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:28:51.103192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:28:51.103216 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:28:51.103238 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:28:51.103261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:28:51.103284 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:28:51.103315 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:28:51.103338 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:28:51.103360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:28:51.103382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:28:51.103404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:28:51.103473 systemd-journald[183]: Collecting audit messages is disabled.
Nov 1 00:28:51.103526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:28:51.103549 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:28:51.103573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:28:51.103602 systemd-journald[183]: Journal started
Nov 1 00:28:51.103642 systemd-journald[183]: Runtime Journal (/run/log/journal/34362c14c0f34393aaf0ad64de9a0578) is 8.0M, max 148.7M, 140.7M free.
Nov 1 00:28:51.091255 systemd-modules-load[184]: Inserted module 'overlay'
Nov 1 00:28:51.116062 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:28:51.126246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:51.138267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:28:51.144087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:28:51.147764 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 1 00:28:51.152168 kernel: Bridge firewalling registered
Nov 1 00:28:51.148790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:28:51.159258 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:28:51.173282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:28:51.175237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:28:51.183259 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:28:51.195701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:28:51.208048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:28:51.214942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:51.219564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:28:51.227250 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:28:51.242245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:28:51.258200 dracut-cmdline[216]: dracut-dracut-053
Nov 1 00:28:51.263376 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:28:51.306394 systemd-resolved[217]: Positive Trust Anchors:
Nov 1 00:28:51.306925 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:28:51.306993 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:28:51.312265 systemd-resolved[217]: Defaulting to hostname 'linux'.
Nov 1 00:28:51.313923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:28:51.332692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:28:51.374080 kernel: SCSI subsystem initialized
Nov 1 00:28:51.385074 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:28:51.398077 kernel: iscsi: registered transport (tcp)
Nov 1 00:28:51.422440 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:28:51.422502 kernel: QLogic iSCSI HBA Driver
Nov 1 00:28:51.475071 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:28:51.483250 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:28:51.519428 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:28:51.519501 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:28:51.519530 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:28:51.565071 kernel: raid6: avx2x4 gen() 18200 MB/s
Nov 1 00:28:51.582068 kernel: raid6: avx2x2 gen() 18292 MB/s
Nov 1 00:28:51.599487 kernel: raid6: avx2x1 gen() 14326 MB/s
Nov 1 00:28:51.599527 kernel: raid6: using algorithm avx2x2 gen() 18292 MB/s
Nov 1 00:28:51.617463 kernel: raid6: .... xor() 17747 MB/s, rmw enabled
Nov 1 00:28:51.617520 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:28:51.641074 kernel: xor: automatically using best checksumming function avx
Nov 1 00:28:51.819073 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:28:51.832882 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:28:51.841138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:28:51.864386 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Nov 1 00:28:51.871282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:28:51.881239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:28:51.911741 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Nov 1 00:28:51.951306 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:28:51.957287 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:28:52.059126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:28:52.071208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:28:52.109324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:28:52.120866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:28:52.129201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:28:52.133614 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:28:52.152403 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:28:52.197065 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:28:52.202240 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:28:52.207202 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 1 00:28:52.237056 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:28:52.250398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:28:52.251380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:52.265346 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:28:52.268653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:28:52.268884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:52.281586 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:28:52.281738 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:28:52.273480 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:28:52.290482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:28:52.305726 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Nov 1 00:28:52.306098 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 1 00:28:52.306370 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 1 00:28:52.306607 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 1 00:28:52.310019 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 00:28:52.323179 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:28:52.323228 kernel: GPT:17805311 != 33554431
Nov 1 00:28:52.323253 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:28:52.323277 kernel: GPT:17805311 != 33554431
Nov 1 00:28:52.323307 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:28:52.323359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:52.326276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:52.331246 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 1 00:28:52.340264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:28:52.377476 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:52.400732 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (446)
Nov 1 00:28:52.405535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 1 00:28:52.410326 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (442)
Nov 1 00:28:52.433893 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 1 00:28:52.444081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 1 00:28:52.444310 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 1 00:28:52.460902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 1 00:28:52.473263 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:28:52.486204 disk-uuid[548]: Primary Header is updated.
Nov 1 00:28:52.486204 disk-uuid[548]: Secondary Entries is updated.
Nov 1 00:28:52.486204 disk-uuid[548]: Secondary Header is updated.
Nov 1 00:28:52.497158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:52.517074 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:52.525126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:53.525085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:28:53.527288 disk-uuid[549]: The operation has completed successfully.
Nov 1 00:28:53.596823 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:28:53.596999 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:28:53.638251 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:28:53.668702 sh[566]: Success
Nov 1 00:28:53.693090 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:28:53.777797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:28:53.784623 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:28:53.808634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:28:53.847098 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:28:53.847172 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:53.870307 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:28:53.870367 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:28:53.870404 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:28:53.908072 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 00:28:53.916543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:28:53.917566 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:28:53.923274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:28:53.942231 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:28:53.993271 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:53.993331 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:53.993359 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:28:54.018390 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:28:54.018461 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:28:54.042565 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:54.042104 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:28:54.063161 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:28:54.078322 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:28:54.173513 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:28:54.184243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:28:54.288576 ignition[671]: Ignition 2.19.0
Nov 1 00:28:54.291941 systemd-networkd[748]: lo: Link UP
Nov 1 00:28:54.289017 ignition[671]: Stage: fetch-offline
Nov 1 00:28:54.291949 systemd-networkd[748]: lo: Gained carrier
Nov 1 00:28:54.289126 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.292296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:28:54.289144 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.293835 systemd-networkd[748]: Enumeration completed
Nov 1 00:28:54.289325 ignition[671]: parsed url from cmdline: ""
Nov 1 00:28:54.294723 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:28:54.289332 ignition[671]: no config URL provided
Nov 1 00:28:54.294731 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:28:54.289343 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.296583 systemd-networkd[748]: eth0: Link UP
Nov 1 00:28:54.289358 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.296591 systemd-networkd[748]: eth0: Gained carrier
Nov 1 00:28:54.289369 ignition[671]: failed to fetch config: resource requires networking
Nov 1 00:28:54.296604 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:28:54.289669 ignition[671]: Ignition finished successfully
Nov 1 00:28:54.307116 systemd-networkd[748]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84'
Nov 1 00:28:54.394443 ignition[757]: Ignition 2.19.0
Nov 1 00:28:54.307133 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 1 00:28:54.394452 ignition[757]: Stage: fetch
Nov 1 00:28:54.313373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:28:54.394642 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.330433 systemd[1]: Reached target network.target - Network.
Nov 1 00:28:54.394654 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.349286 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 00:28:54.394771 ignition[757]: parsed url from cmdline: ""
Nov 1 00:28:54.404618 unknown[757]: fetched base config from "system"
Nov 1 00:28:54.394778 ignition[757]: no config URL provided
Nov 1 00:28:54.404631 unknown[757]: fetched base config from "system"
Nov 1 00:28:54.394788 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.404643 unknown[757]: fetched user config from "gcp"
Nov 1 00:28:54.394799 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:28:54.408652 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:28:54.394823 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 1 00:28:54.431259 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:28:54.397991 ignition[757]: GET result: OK
Nov 1 00:28:54.499304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:28:54.398132 ignition[757]: parsing config with SHA512: 3ce23d037ceb8dbb81dadb4fc7ef59700d1cb2e8142e1e24027d908cf796d40579072887050797542339c76692e41fc76ae9434e85aff53d8585addc20838653
Nov 1 00:28:54.507238 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:28:54.406640 ignition[757]: fetch: fetch complete
Nov 1 00:28:54.546680 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:28:54.406651 ignition[757]: fetch: fetch passed
Nov 1 00:28:54.566963 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:28:54.406730 ignition[757]: Ignition finished successfully
Nov 1 00:28:54.591292 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:28:54.496639 ignition[763]: Ignition 2.19.0
Nov 1 00:28:54.601354 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:28:54.496648 ignition[763]: Stage: kargs
Nov 1 00:28:54.629274 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:28:54.496871 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.637382 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:28:54.496884 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.662339 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:28:54.498104 ignition[763]: kargs: kargs passed
Nov 1 00:28:54.498164 ignition[763]: Ignition finished successfully
Nov 1 00:28:54.539589 ignition[768]: Ignition 2.19.0
Nov 1 00:28:54.539598 ignition[768]: Stage: disks
Nov 1 00:28:54.539818 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:54.539837 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:54.541171 ignition[768]: disks: disks passed
Nov 1 00:28:54.541228 ignition[768]: Ignition finished successfully
Nov 1 00:28:54.719721 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 00:28:54.905164 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:28:54.910168 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:28:55.067076 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:28:55.067738 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:28:55.068599 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:28:55.088162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:28:55.115470 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:28:55.140093 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (785)
Nov 1 00:28:55.141534 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:28:55.163925 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:55.163974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:55.164001 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:28:55.141628 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:28:55.218243 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:28:55.218289 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:28:55.141671 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:28:55.202169 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:28:55.226442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:28:55.250283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:28:55.389896 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:28:55.400747 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:28:55.410134 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:28:55.420206 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:28:55.558454 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:28:55.575175 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:28:55.596072 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:55.609242 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:28:55.618205 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:28:55.663768 ignition[897]: INFO : Ignition 2.19.0
Nov 1 00:28:55.663768 ignition[897]: INFO : Stage: mount
Nov 1 00:28:55.668227 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:28:55.697331 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:55.697331 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:55.697331 ignition[897]: INFO : mount: mount passed
Nov 1 00:28:55.697331 ignition[897]: INFO : Ignition finished successfully
Nov 1 00:28:55.691327 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:28:55.713181 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:28:55.765371 systemd-networkd[748]: eth0: Gained IPv6LL
Nov 1 00:28:55.769279 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:28:55.806074 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (910)
Nov 1 00:28:55.806132 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:28:55.822142 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:28:55.822190 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:28:55.843840 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:28:55.843901 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:28:55.846948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:28:55.883306 ignition[927]: INFO : Ignition 2.19.0 Nov 1 00:28:55.883306 ignition[927]: INFO : Stage: files Nov 1 00:28:55.898195 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:28:55.898195 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:28:55.898195 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:28:55.898195 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:28:55.898195 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:28:55.898195 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:28:55.898195 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:28:55.894761 unknown[927]: wrote ssh authorized keys file for user: core Nov 1 00:28:56.089714 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:28:56.352536 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:28:56.352536 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:28:56.384172 
ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:56.384172 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw:
attempt #1
Nov 1 00:28:56.856767 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:28:57.677749 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:28:57.677749 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:28:57.717309 ignition[927]: INFO : files: files passed
Nov 1 00:28:57.717309 ignition[927]: INFO : Ignition finished successfully
Nov 1 00:28:57.682261 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:28:57.713465 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:28:57.733556 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:28:57.772720 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:28:57.927260 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:28:57.927260 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:28:57.772853 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:28:57.994303 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:28:57.793755 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:28:57.815654 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:28:57.846327 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:28:57.921416 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:28:57.921540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:28:57.938332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:28:57.962234 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:28:57.983310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:28:57.990247 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:28:58.044324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:28:58.071255 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:28:58.105235 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:28:58.117419 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:28:58.127553 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:28:58.149494 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:28:58.149686 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:28:58.206371 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:28:58.216511 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:28:58.233564 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:28:58.248514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:28:58.267531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:28:58.285578 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:28:58.302526 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:28:58.320568 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:28:58.340581 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:28:58.358578 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:28:58.375421 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:28:58.375644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:28:58.424388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:28:58.432485 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:28:58.450437 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:28:58.450621 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:28:58.469501 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:28:58.469696 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:28:58.525358 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:28:58.525591 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:28:58.535594 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:28:58.535770 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:28:58.562458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:28:58.601279 ignition[979]: INFO : Ignition 2.19.0
Nov 1 00:28:58.601279 ignition[979]: INFO : Stage: umount
Nov 1 00:28:58.601279 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:28:58.601279 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:28:58.601279 ignition[979]: INFO : umount: umount passed
Nov 1 00:28:58.601279 ignition[979]: INFO : Ignition finished successfully
Nov 1 00:28:58.609178 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:28:58.609445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:28:58.623284 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:28:58.683192 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:28:58.683494 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:28:58.705476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:28:58.705710 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:28:58.738389 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:28:58.739709 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:28:58.739827 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:28:58.744828 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:28:58.744937 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:28:58.763822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:28:58.763959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:28:58.780366 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:28:58.780436 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:28:58.797494 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:28:58.797562 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:28:58.814451 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:28:58.814520 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 00:28:58.831438 systemd[1]: Stopped target network.target - Network.
Nov 1 00:28:58.848334 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:28:58.848413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:28:58.863393 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:28:58.889184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:28:58.894124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:28:58.900352 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:28:58.918438 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:28:58.933465 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:28:58.933535 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:28:58.948439 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:28:58.948513 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:28:58.965421 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:28:58.965498 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:28:58.982482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:28:58.982558 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:28:58.999460 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:28:58.999534 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:28:59.016664 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:28:59.021113 systemd-networkd[748]: eth0: DHCPv6 lease lost
Nov 1 00:28:59.044390 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:28:59.064718 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:28:59.064851 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:28:59.074005 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:28:59.074278 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:28:59.091966 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:28:59.092024 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:28:59.118167 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:28:59.137119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:28:59.137214 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:28:59.149221 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:28:59.149301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:28:59.167258 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:28:59.167381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:28:59.185233 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:28:59.185343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:28:59.204359 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:28:59.217498 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:28:59.217720 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:28:59.242557 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:28:59.619174 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:28:59.242666 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:28:59.260307 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:28:59.260356 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:28:59.278295 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:28:59.278362 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:28:59.322258 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:28:59.322359 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:28:59.359265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:28:59.359375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:28:59.410213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:28:59.421305 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:28:59.421395 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:28:59.457453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:28:59.457542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:28:59.486917 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:28:59.487068 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:28:59.496737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:28:59.496849 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:28:59.514781 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:28:59.537239 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:28:59.572058 systemd[1]: Switching root.
Nov 1 00:28:59.822147 systemd-journald[183]: Journal stopped
Nov 1 00:29:02.342578 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:29:02.342615 kernel: SELinux: policy capability open_perms=1
Nov 1 00:29:02.342630 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:29:02.342641 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:29:02.342657 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:29:02.342674 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:29:02.342696 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:29:02.342720 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:29:02.342738 kernel: audit: type=1403 audit(1761956940.246:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:29:02.342761 systemd[1]: Successfully loaded SELinux policy in 80.830ms.
Nov 1 00:29:02.342783 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.725ms.
Nov 1 00:29:02.342801 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:29:02.342813 systemd[1]: Detected virtualization google.
Nov 1 00:29:02.342826 systemd[1]: Detected architecture x86-64.
Nov 1 00:29:02.342843 systemd[1]: Detected first boot.
Nov 1 00:29:02.342859 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:29:02.342872 zram_generator::config[1020]: No configuration found.
Nov 1 00:29:02.342886 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:29:02.342899 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:29:02.342915 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 00:29:02.342928 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:29:02.342942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 00:29:02.342954 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 00:29:02.342967 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 00:29:02.342981 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 00:29:02.342994 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 00:29:02.343011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 00:29:02.343024 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 00:29:02.343068 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 00:29:02.343084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:29:02.343097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:29:02.343111 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 00:29:02.343124 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 00:29:02.343137 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 00:29:02.343155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:29:02.343169 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 00:29:02.343182 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:29:02.343195 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 00:29:02.343208 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 00:29:02.343221 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:29:02.343239 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 00:29:02.343253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:29:02.343266 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:29:02.343284 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:29:02.343298 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:29:02.343314 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 00:29:02.343328 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 00:29:02.343341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:29:02.343355 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:29:02.343368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:29:02.343385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 00:29:02.343399 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 00:29:02.343413 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 00:29:02.343426 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 00:29:02.343440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:29:02.343457 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 00:29:02.343471 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 00:29:02.343485 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 00:29:02.343499 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:29:02.343512 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:29:02.343526 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 00:29:02.343540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:29:02.343554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:29:02.343570 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 00:29:02.343585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:29:02.343599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:29:02.343613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:29:02.343626 kernel: ACPI: bus type drm_connector registered
Nov 1 00:29:02.343639 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 00:29:02.343653 kernel: fuse: init (API version 7.39)
Nov 1 00:29:02.343665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:29:02.343682 kernel: loop: module loaded
Nov 1 00:29:02.343695 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:29:02.343709 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:29:02.343723 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 00:29:02.343737 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:29:02.343750 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:29:02.343764 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:29:02.343783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:29:02.343797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:29:02.343814 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:29:02.343853 systemd-journald[1107]: Collecting audit messages is disabled.
Nov 1 00:29:02.343881 systemd-journald[1107]: Journal started
Nov 1 00:29:02.343910 systemd-journald[1107]: Runtime Journal (/run/log/journal/c756a15985034b188e651580e5632061) is 8.0M, max 148.7M, 140.7M free.
Nov 1 00:29:01.101503 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:29:01.121788 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 1 00:29:01.122492 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:29:02.377643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:29:02.377726 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:29:02.377756 systemd[1]: Stopped verity-setup.service.
Nov 1 00:29:02.409072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:29:02.421100 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:29:02.431718 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:29:02.441391 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:29:02.451401 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:29:02.461392 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:29:02.471374 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:29:02.481380 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:29:02.492602 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:29:02.504641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:29:02.516707 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:29:02.516965 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:29:02.528514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:29:02.528742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:29:02.540513 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:29:02.540775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:29:02.550516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:29:02.550736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:29:02.562522 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:29:02.562752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:29:02.572492 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:29:02.572780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:29:02.582499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:29:02.592469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:29:02.604529 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:29:02.616535 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:29:02.641706 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:29:02.657194 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:29:02.677633 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:29:02.687204 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:29:02.687419 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:29:02.698497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:29:02.722301 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:29:02.734450 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:29:02.744344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:29:02.751642 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:29:02.767741 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:29:02.779206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:29:02.786186 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:29:02.796205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:29:02.810262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:29:02.812291 systemd-journald[1107]: Time spent on flushing to /var/log/journal/c756a15985034b188e651580e5632061 is 88.483ms for 927 entries.
Nov 1 00:29:02.812291 systemd-journald[1107]: System Journal (/var/log/journal/c756a15985034b188e651580e5632061) is 8.0M, max 584.8M, 576.8M free.
Nov 1 00:29:02.937117 systemd-journald[1107]: Received client request to flush runtime journal.
Nov 1 00:29:02.937181 kernel: loop0: detected capacity change from 0 to 54824
Nov 1 00:29:02.837830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:29:02.861224 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:29:02.878851 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:29:02.895946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:29:02.907356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:29:02.918519 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:29:02.932525 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:29:02.949808 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:29:02.961059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:29:02.984429 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:29:03.018087 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:29:03.019237 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:29:03.037451 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:29:03.058435 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:29:03.076867 kernel: loop1: detected capacity change from 0 to 224512
Nov 1 00:29:03.075676 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:29:03.076632 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:29:03.095733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:29:03.154074 kernel: loop2: detected capacity change from 0 to 142488
Nov 1 00:29:03.191519 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Nov 1 00:29:03.194300 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Nov 1 00:29:03.208677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:29:03.277081 kernel: loop3: detected capacity change from 0 to 140768
Nov 1 00:29:03.386073 kernel: loop4: detected capacity change from 0 to 54824
Nov 1 00:29:03.416070 kernel: loop5: detected capacity change from 0 to 224512
Nov 1 00:29:03.461326 kernel: loop6: detected capacity change from 0 to 142488
Nov 1 00:29:03.520075 kernel: loop7: detected capacity change from 0 to 140768
Nov 1 00:29:03.572663 (sd-merge)[1163]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Nov 1 00:29:03.573815 (sd-merge)[1163]: Merged extensions into '/usr'.
Nov 1 00:29:03.582583 systemd[1]: Reloading requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:29:03.583046 systemd[1]: Reloading...
Nov 1 00:29:03.713140 zram_generator::config[1185]: No configuration found.
Nov 1 00:29:03.987999 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:29:04.030716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:29:04.122996 systemd[1]: Reloading finished in 538 ms. Nov 1 00:29:04.152734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:29:04.162781 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:29:04.175588 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:29:04.203283 systemd[1]: Starting ensure-sysext.service... Nov 1 00:29:04.215252 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:29:04.235296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:29:04.253260 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:29:04.253285 systemd[1]: Reloading... Nov 1 00:29:04.259605 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:29:04.260424 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:29:04.263353 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:29:04.264166 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 1 00:29:04.264463 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 1 00:29:04.277511 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:29:04.277701 systemd-tmpfiles[1231]: Skipping /boot Nov 1 00:29:04.312468 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 1 00:29:04.312499 systemd-tmpfiles[1231]: Skipping /boot Nov 1 00:29:04.354382 systemd-udevd[1232]: Using default interface naming scheme 'v255'. Nov 1 00:29:04.416340 zram_generator::config[1254]: No configuration found. Nov 1 00:29:04.688070 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1296) Nov 1 00:29:04.714838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:29:04.808458 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 1 00:29:04.820519 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 00:29:04.847826 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:29:04.848009 systemd[1]: Reloading finished in 592 ms. Nov 1 00:29:04.878061 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:29:04.881033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:29:04.892603 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:29:04.907150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:29:04.944079 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 1 00:29:04.988179 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:29:04.999073 kernel: ACPI: button: Sleep Button [SLPF] Nov 1 00:29:05.001465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:29:05.007060 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:29:05.014378 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Nov 1 00:29:05.035109 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:29:05.046383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:29:05.054584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:29:05.071316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:29:05.092287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:29:05.103337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:29:05.108075 augenrules[1351]: No rules Nov 1 00:29:05.110473 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:29:05.128548 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:29:05.146639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:29:05.165130 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:29:05.177027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:29:05.186449 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:29:05.197978 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:29:05.209937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:29:05.210172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:29:05.221962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:29:05.222198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 1 00:29:05.233968 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:29:05.234224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:29:05.245091 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:29:05.279023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 1 00:29:05.295179 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:29:05.306935 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:29:05.321430 systemd[1]: Finished ensure-sysext.service. Nov 1 00:29:05.334228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:29:05.334510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:29:05.339251 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:29:05.357570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:29:05.375865 lvm[1369]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:29:05.378009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:29:05.396273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:29:05.414309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:29:05.430275 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 1 00:29:05.438289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:29:05.440170 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Nov 1 00:29:05.451217 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:29:05.465559 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:29:05.473326 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:29:05.493255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:29:05.503186 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:29:05.503404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:29:05.506113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:29:05.517716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:29:05.517948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:29:05.529629 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:29:05.530568 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:29:05.531145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:29:05.531359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:29:05.531981 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:29:05.532227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:29:05.537631 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:29:05.539178 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:29:05.550446 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Nov 1 00:29:05.560149 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:29:05.569715 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:29:05.569820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:29:05.569914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:29:05.570514 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 1 00:29:05.583236 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Nov 1 00:29:05.613092 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:29:05.671593 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:29:05.683755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:29:05.693228 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Nov 1 00:29:05.716273 systemd-networkd[1357]: lo: Link UP Nov 1 00:29:05.716290 systemd-networkd[1357]: lo: Gained carrier Nov 1 00:29:05.718612 systemd-networkd[1357]: Enumeration completed Nov 1 00:29:05.718768 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:29:05.719483 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:29:05.719490 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:29:05.720247 systemd-networkd[1357]: eth0: Link UP Nov 1 00:29:05.720254 systemd-networkd[1357]: eth0: Gained carrier Nov 1 00:29:05.720276 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 1 00:29:05.731192 systemd-networkd[1357]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:29:05.731218 systemd-networkd[1357]: eth0: DHCPv4 address 10.128.0.44/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 1 00:29:05.735291 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:29:05.739450 systemd-resolved[1358]: Positive Trust Anchors: Nov 1 00:29:05.739473 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:29:05.739532 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:29:05.745466 systemd-resolved[1358]: Defaulting to hostname 'linux'. Nov 1 00:29:05.753250 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:29:05.763329 systemd[1]: Reached target network.target - Network. Nov 1 00:29:05.772162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:29:05.783187 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:29:05.793318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:29:05.804201 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Nov 1 00:29:05.815354 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:29:05.825296 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:29:05.836159 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:29:05.847172 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:29:05.847232 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:29:05.855150 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:29:05.863921 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:29:05.875791 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:29:05.896921 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:29:05.907957 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:29:05.918288 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:29:05.928151 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:29:05.936193 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:29:05.936244 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:29:05.945173 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:29:05.956856 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:29:05.973739 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:29:05.990191 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:29:06.013377 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 1 00:29:06.023156 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:29:06.025278 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:29:06.036567 jq[1422]: false Nov 1 00:29:06.042252 systemd[1]: Started ntpd.service - Network Time Service. Nov 1 00:29:06.058172 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:29:06.077701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:29:06.083539 coreos-metadata[1420]: Nov 01 00:29:06.083 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Nov 1 00:29:06.089863 coreos-metadata[1420]: Nov 01 00:29:06.088 INFO Fetch successful Nov 1 00:29:06.089863 coreos-metadata[1420]: Nov 01 00:29:06.088 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Nov 1 00:29:06.089863 coreos-metadata[1420]: Nov 01 00:29:06.088 INFO Fetch successful Nov 1 00:29:06.089863 coreos-metadata[1420]: Nov 01 00:29:06.088 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Nov 1 00:29:06.094197 coreos-metadata[1420]: Nov 01 00:29:06.092 INFO Fetch successful Nov 1 00:29:06.094197 coreos-metadata[1420]: Nov 01 00:29:06.092 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Nov 1 00:29:06.094197 coreos-metadata[1420]: Nov 01 00:29:06.093 INFO Fetch successful Nov 1 00:29:06.092213 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:29:06.111614 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 1 00:29:06.115435 dbus-daemon[1421]: [system] SELinux support is enabled Nov 1 00:29:06.121780 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 1 00:29:06.124073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:29:06.129143 dbus-daemon[1421]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1357 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:29:06.131261 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:29:06.141070 extend-filesystems[1423]: Found loop4 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found loop5 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found loop6 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found loop7 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda1 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda2 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda3 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found usr Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda4 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda6 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda7 Nov 1 00:29:06.141070 extend-filesystems[1423]: Found sda9 Nov 1 00:29:06.141070 extend-filesystems[1423]: Checking size of /dev/sda9 Nov 1 00:29:06.296411 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Nov 1 00:29:06.151136 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: ntpd 4.2.8p17@1.4004-o Fri Oct 31 22:05:56 UTC 2025 (1): Starting Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: ---------------------------------------------------- Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: ntp-4 is maintained by Network Time Foundation, Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: corporation. Support and training for ntp-4 are Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: available at https://www.nwtime.org/support Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: ---------------------------------------------------- Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: proto: precision = 0.072 usec (-24) Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: basedate set to 2025-10-19 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: gps base set to 2025-10-19 (week 2389) Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Listen and drop on 0 v6wildcard [::]:123 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Listen normally on 2 lo 127.0.0.1:123 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Listen normally on 3 eth0 10.128.0.44:123 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Listen normally on 4 lo [::1]:123 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: bind(21) AF_INET6 fe80::4001:aff:fe80:2c%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:2c%2#123
Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:2c%2 Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: Listening on routing socket on fd #21 for interface updates Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:29:06.296748 ntpd[1427]: 1 Nov 00:29:06 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:29:06.171920 ntpd[1427]: ntpd 4.2.8p17@1.4004-o Fri Oct 31 22:05:56 UTC 2025 (1): Starting Nov 1 00:29:06.336440 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Nov 1 00:29:06.336510 extend-filesystems[1423]: Resized partition /dev/sda9 Nov 1 00:29:06.169784 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:29:06.171959 ntpd[1427]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 1 00:29:06.349439 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:29:06.349439 extend-filesystems[1453]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 00:29:06.349439 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 1 00:29:06.349439 extend-filesystems[1453]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Nov 1 00:29:06.399450 update_engine[1439]: I20251101 00:29:06.176663 1439 main.cc:92] Flatcar Update Engine starting Nov 1 00:29:06.399450 update_engine[1439]: I20251101 00:29:06.185344 1439 update_check_scheduler.cc:74] Next update check in 5m57s Nov 1 00:29:06.206637 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:29:06.171975 ntpd[1427]: ---------------------------------------------------- Nov 1 00:29:06.400397 extend-filesystems[1423]: Resized filesystem in /dev/sda9 Nov 1 00:29:06.436243 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1266) Nov 1 00:29:06.436295 jq[1443]: true Nov 1 00:29:06.206930 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:29:06.171989 ntpd[1427]: ntp-4 is maintained by Network Time Foundation, Nov 1 00:29:06.207419 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:29:06.172004 ntpd[1427]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 1 00:29:06.210097 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:29:06.172017 ntpd[1427]: corporation. Support and training for ntp-4 are Nov 1 00:29:06.233540 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:29:06.172046 ntpd[1427]: available at https://www.nwtime.org/support Nov 1 00:29:06.233792 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:29:06.172071 ntpd[1427]: ---------------------------------------------------- Nov 1 00:29:06.443463 jq[1456]: true Nov 1 00:29:06.295758 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:29:06.176500 ntpd[1427]: proto: precision = 0.072 usec (-24) Nov 1 00:29:06.317655 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:29:06.179564 ntpd[1427]: basedate set to 2025-10-19 Nov 1 00:29:06.329574 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 1 00:29:06.179588 ntpd[1427]: gps base set to 2025-10-19 (week 2389) Nov 1 00:29:06.329619 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:29:06.189260 ntpd[1427]: Listen and drop on 0 v6wildcard [::]:123 Nov 1 00:29:06.357288 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 1 00:29:06.189321 ntpd[1427]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 1 00:29:06.371018 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:29:06.190590 ntpd[1427]: Listen normally on 2 lo 127.0.0.1:123 Nov 1 00:29:06.371127 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:29:06.190666 ntpd[1427]: Listen normally on 3 eth0 10.128.0.44:123 Nov 1 00:29:06.389563 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:29:06.190740 ntpd[1427]: Listen normally on 4 lo [::1]:123 Nov 1 00:29:06.389600 systemd-logind[1438]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 1 00:29:06.190827 ntpd[1427]: bind(21) AF_INET6 fe80::4001:aff:fe80:2c%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:29:06.389648 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:29:06.190861 ntpd[1427]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:2c%2#123 Nov 1 00:29:06.393714 systemd-logind[1438]: New seat seat0. Nov 1 00:29:06.190887 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:2c%2 Nov 1 00:29:06.430710 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:29:06.190950 ntpd[1427]: Listening on routing socket on fd #21 for interface updates Nov 1 00:29:06.447622 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 1 00:29:06.214478 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:29:06.217142 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:29:06.295332 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:29:06.458267 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:29:06.458584 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:29:06.509153 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:29:06.584456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:29:06.597122 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:29:06.603069 tar[1454]: linux-amd64/LICENSE Nov 1 00:29:06.614072 tar[1454]: linux-amd64/helm Nov 1 00:29:06.645756 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:29:06.645983 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 1 00:29:06.680782 dbus-daemon[1421]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1467 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:29:06.692374 systemd[1]: Starting polkit.service - Authorization Manager... Nov 1 00:29:06.699221 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:29:06.702110 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:29:06.727640 systemd[1]: Starting sshkeys.service... Nov 1 00:29:06.791724 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:29:06.813561 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 1 00:29:06.817582 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:29:06.819097 polkitd[1492]: Started polkitd version 121 Nov 1 00:29:06.836517 polkitd[1492]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:29:06.836625 polkitd[1492]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:29:06.843091 polkitd[1492]: Finished loading, compiling and executing 2 rules Nov 1 00:29:06.846308 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:29:06.846523 systemd[1]: Started polkit.service - Authorization Manager. Nov 1 00:29:06.849733 polkitd[1492]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:29:06.910616 systemd-hostnamed[1467]: Hostname set to (transient) Nov 1 00:29:06.911390 systemd-resolved[1358]: System hostname changed to 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84'. Nov 1 00:29:06.959745 coreos-metadata[1501]: Nov 01 00:29:06.958 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 1 00:29:06.961518 coreos-metadata[1501]: Nov 01 00:29:06.960 INFO Fetch failed with 404: resource not found Nov 1 00:29:06.961518 coreos-metadata[1501]: Nov 01 00:29:06.960 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 1 00:29:06.961950 coreos-metadata[1501]: Nov 01 00:29:06.961 INFO Fetch successful Nov 1 00:29:06.961950 coreos-metadata[1501]: Nov 01 00:29:06.961 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 1 00:29:06.962235 coreos-metadata[1501]: Nov 01 00:29:06.962 INFO Fetch failed with 404: resource not found Nov 1 00:29:06.962450 coreos-metadata[1501]: Nov 01 00:29:06.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 1 00:29:06.962825 coreos-metadata[1501]: Nov 01 00:29:06.962 INFO Fetch failed with 404: resource not found
Nov 1 00:29:06.962980 coreos-metadata[1501]: Nov 01 00:29:06.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 1 00:29:06.965523 coreos-metadata[1501]: Nov 01 00:29:06.964 INFO Fetch successful Nov 1 00:29:06.970124 unknown[1501]: wrote ssh authorized keys file for user: core Nov 1 00:29:07.018705 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:29:07.020355 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:29:07.029252 systemd-networkd[1357]: eth0: Gained IPv6LL Nov 1 00:29:07.035113 systemd[1]: Finished sshkeys.service. Nov 1 00:29:07.042803 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:29:07.057417 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:29:07.078286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:29:07.098137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:29:07.113408 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Nov 1 00:29:07.159216 init.sh[1522]: + '[' -e /etc/default/instance_configs.cfg.template ']' Nov 1 00:29:07.160276 init.sh[1522]: + echo -e '[InstanceSetup]\nset_host_keys = false' Nov 1 00:29:07.160276 init.sh[1522]: + /usr/bin/google_instance_setup Nov 1 00:29:07.213817 containerd[1462]: time="2025-11-01T00:29:07.210463022Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:29:07.219421 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:29:07.327068 containerd[1462]: time="2025-11-01T00:29:07.326081718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332437057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332485971Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332513688Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332749075Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332776145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332859258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.332880988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.333260028Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.333293202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.333318625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:29:07.333835 containerd[1462]: time="2025-11-01T00:29:07.333336932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:29:07.334432 containerd[1462]: time="2025-11-01T00:29:07.333462695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:29:07.334432 containerd[1462]: time="2025-11-01T00:29:07.333779028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:29:07.337426 containerd[1462]: time="2025-11-01T00:29:07.336797509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:29:07.337426 containerd[1462]: time="2025-11-01T00:29:07.336838512Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:29:07.337426 containerd[1462]: time="2025-11-01T00:29:07.336998716Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1 Nov 1 00:29:07.337426 containerd[1462]: time="2025-11-01T00:29:07.337098846Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:29:07.343058 containerd[1462]: time="2025-11-01T00:29:07.343003256Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:29:07.343261 containerd[1462]: time="2025-11-01T00:29:07.343212475Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:29:07.343477 containerd[1462]: time="2025-11-01T00:29:07.343453039Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:29:07.343664 containerd[1462]: time="2025-11-01T00:29:07.343621595Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:29:07.343793 containerd[1462]: time="2025-11-01T00:29:07.343769972Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:29:07.344829 containerd[1462]: time="2025-11-01T00:29:07.344160610Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346429646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346616317Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346644908Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346670393Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346697170Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346721656Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346743245Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346766177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346790577Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346812064Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346833396Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346856481Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:29:07.346957 containerd[1462]: time="2025-11-01T00:29:07.346888385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.346912038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347615791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347667215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347694009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347731285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347754161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347776796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347798114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347823041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347842719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347865232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347888859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347914475Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347949318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.348658 containerd[1462]: time="2025-11-01T00:29:07.347982575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.348002757Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350081942Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350210019Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350234081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350257200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350276977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350306185Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350323363Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:29:07.351456 containerd[1462]: time="2025-11-01T00:29:07.350341288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:29:07.351939 containerd[1462]: time="2025-11-01T00:29:07.350808705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:29:07.351939 containerd[1462]: time="2025-11-01T00:29:07.350919145Z" level=info msg="Connect containerd service" Nov 1 00:29:07.351939 containerd[1462]: time="2025-11-01T00:29:07.350978823Z" level=info msg="using legacy CRI server" Nov 1 00:29:07.351939 containerd[1462]: time="2025-11-01T00:29:07.350991797Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:29:07.354058 containerd[1462]: time="2025-11-01T00:29:07.353027098Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:29:07.355532 containerd[1462]: time="2025-11-01T00:29:07.355497288Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:29:07.358609 containerd[1462]: time="2025-11-01T00:29:07.358110211Z" level=info msg="Start subscribing containerd event" Nov 1 
00:29:07.358609 containerd[1462]: time="2025-11-01T00:29:07.358205576Z" level=info msg="Start recovering state" Nov 1 00:29:07.358609 containerd[1462]: time="2025-11-01T00:29:07.358295793Z" level=info msg="Start event monitor" Nov 1 00:29:07.358609 containerd[1462]: time="2025-11-01T00:29:07.358320247Z" level=info msg="Start snapshots syncer" Nov 1 00:29:07.358609 containerd[1462]: time="2025-11-01T00:29:07.358335170Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:29:07.358609 containerd[1462]: time="2025-11-01T00:29:07.358349839Z" level=info msg="Start streaming server" Nov 1 00:29:07.360409 containerd[1462]: time="2025-11-01T00:29:07.359437200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:29:07.360409 containerd[1462]: time="2025-11-01T00:29:07.359527826Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:29:07.360283 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:29:07.366531 containerd[1462]: time="2025-11-01T00:29:07.364265638Z" level=info msg="containerd successfully booted in 0.159113s" Nov 1 00:29:08.208793 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:29:08.253066 tar[1454]: linux-amd64/README.md Nov 1 00:29:08.266425 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:29:08.275114 instance-setup[1527]: INFO Running google_set_multiqueue. Nov 1 00:29:08.287320 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:29:08.304139 systemd[1]: Started sshd@0-10.128.0.44:22-147.75.109.163:53358.service - OpenSSH per-connection server daemon (147.75.109.163:53358). Nov 1 00:29:08.310690 instance-setup[1527]: INFO Set channels for eth0 to 2. Nov 1 00:29:08.317189 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:29:08.322814 instance-setup[1527]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
Nov 1 00:29:08.328283 instance-setup[1527]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Nov 1 00:29:08.328552 instance-setup[1527]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Nov 1 00:29:08.329442 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:29:08.329731 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:29:08.335665 instance-setup[1527]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Nov 1 00:29:08.335902 instance-setup[1527]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Nov 1 00:29:08.348442 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:29:08.349602 instance-setup[1527]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Nov 1 00:29:08.349655 instance-setup[1527]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Nov 1 00:29:08.352976 instance-setup[1527]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Nov 1 00:29:08.368889 instance-setup[1527]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 1 00:29:08.375145 instance-setup[1527]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 1 00:29:08.377437 instance-setup[1527]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 1 00:29:08.377605 instance-setup[1527]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 1 00:29:08.394721 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:29:08.403663 init.sh[1522]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 1 00:29:08.415530 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:29:08.434110 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:29:08.445775 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 1 00:29:08.582947 startup-script[1584]: INFO Starting startup scripts. Nov 1 00:29:08.588986 startup-script[1584]: INFO No startup scripts found in metadata. Nov 1 00:29:08.589093 startup-script[1584]: INFO Finished running startup scripts. Nov 1 00:29:08.615226 init.sh[1522]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 1 00:29:08.615380 init.sh[1522]: + daemon_pids=() Nov 1 00:29:08.615380 init.sh[1522]: + for d in accounts clock_skew network Nov 1 00:29:08.615644 init.sh[1522]: + daemon_pids+=($!) Nov 1 00:29:08.615703 init.sh[1522]: + for d in accounts clock_skew network Nov 1 00:29:08.616278 init.sh[1522]: + daemon_pids+=($!) Nov 1 00:29:08.616278 init.sh[1522]: + for d in accounts clock_skew network Nov 1 00:29:08.616397 init.sh[1522]: + daemon_pids+=($!) Nov 1 00:29:08.616456 init.sh[1522]: + NOTIFY_SOCKET=/run/systemd/notify Nov 1 00:29:08.616456 init.sh[1522]: + /usr/bin/systemd-notify --ready Nov 1 00:29:08.616780 init.sh[1590]: + /usr/bin/google_clock_skew_daemon Nov 1 00:29:08.617674 init.sh[1589]: + /usr/bin/google_accounts_daemon Nov 1 00:29:08.619286 init.sh[1591]: + /usr/bin/google_network_daemon Nov 1 00:29:08.646642 systemd[1]: Started oem-gce.service - GCE Linux Agent. Nov 1 00:29:08.657778 init.sh[1522]: + wait -n 1589 1590 1591 Nov 1 00:29:08.693078 sshd[1561]: Accepted publickey for core from 147.75.109.163 port 53358 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:08.695017 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:08.719231 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:29:08.737935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:29:08.756318 systemd-logind[1438]: New session 1 of user core. Nov 1 00:29:08.797123 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Nov 1 00:29:08.820494 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:29:08.862704 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:29:09.151961 systemd[1595]: Queued start job for default target default.target. Nov 1 00:29:09.159699 systemd[1595]: Created slice app.slice - User Application Slice. Nov 1 00:29:09.159995 google-clock-skew[1590]: INFO Starting Google Clock Skew daemon. Nov 1 00:29:09.159802 systemd[1595]: Reached target paths.target - Paths. Nov 1 00:29:09.159827 systemd[1595]: Reached target timers.target - Timers. Nov 1 00:29:09.165201 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:29:09.173328 ntpd[1427]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:2c%2]:123 Nov 1 00:29:09.175627 ntpd[1427]: 1 Nov 00:29:09 ntpd[1427]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:2c%2]:123 Nov 1 00:29:09.177572 google-clock-skew[1590]: INFO Clock drift token has changed: 0. Nov 1 00:29:09.202417 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:29:09.202666 systemd[1595]: Reached target sockets.target - Sockets. Nov 1 00:29:09.202709 systemd[1595]: Reached target basic.target - Basic System. Nov 1 00:29:09.202789 systemd[1595]: Reached target default.target - Main User Target. Nov 1 00:29:09.202855 systemd[1595]: Startup finished in 322ms. Nov 1 00:29:09.203192 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:29:09.221229 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:29:09.239159 groupadd[1607]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 1 00:29:09.248530 groupadd[1607]: group added to /etc/gshadow: name=google-sudoers Nov 1 00:29:09.255957 google-networking[1591]: INFO Starting Google Networking daemon. Nov 1 00:29:09.000460 google-clock-skew[1590]: INFO Synced system time with hardware clock. 
Nov 1 00:29:09.026082 systemd-journald[1107]: Time jumped backwards, rotating. Nov 1 00:29:09.015552 groupadd[1607]: new group: name=google-sudoers, GID=1000 Nov 1 00:29:09.003117 systemd-resolved[1358]: Clock change detected. Flushing caches. Nov 1 00:29:09.050824 google-accounts[1589]: INFO Starting Google Accounts daemon. Nov 1 00:29:09.063734 google-accounts[1589]: WARNING OS Login not installed. Nov 1 00:29:09.065326 google-accounts[1589]: INFO Creating a new user account for 0. Nov 1 00:29:09.076391 init.sh[1621]: useradd: invalid user name '0': use --badname to ignore Nov 1 00:29:09.079254 google-accounts[1589]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 1 00:29:09.171405 systemd[1]: Started sshd@1-10.128.0.44:22-147.75.109.163:53366.service - OpenSSH per-connection server daemon (147.75.109.163:53366). Nov 1 00:29:09.292552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:29:09.304886 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:29:09.309601 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:29:09.315352 systemd[1]: Startup finished in 997ms (kernel) + 9.477s (initrd) + 9.450s (userspace) = 19.925s. Nov 1 00:29:09.486068 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 53366 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:09.487981 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:09.496188 systemd-logind[1438]: New session 2 of user core. Nov 1 00:29:09.500232 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:29:09.700816 sshd[1625]: pam_unix(sshd:session): session closed for user core Nov 1 00:29:09.706946 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. 
Nov 1 00:29:09.707447 systemd[1]: sshd@1-10.128.0.44:22-147.75.109.163:53366.service: Deactivated successfully. Nov 1 00:29:09.710337 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:29:09.712644 systemd-logind[1438]: Removed session 2. Nov 1 00:29:09.764402 systemd[1]: Started sshd@2-10.128.0.44:22-147.75.109.163:53380.service - OpenSSH per-connection server daemon (147.75.109.163:53380). Nov 1 00:29:10.064144 sshd[1646]: Accepted publickey for core from 147.75.109.163 port 53380 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:10.066899 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:10.073517 systemd-logind[1438]: New session 3 of user core. Nov 1 00:29:10.080268 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:29:10.163720 kubelet[1632]: E1101 00:29:10.163646 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:29:10.167203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:29:10.167463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:29:10.167945 systemd[1]: kubelet.service: Consumed 1.319s CPU time. Nov 1 00:29:10.276515 sshd[1646]: pam_unix(sshd:session): session closed for user core Nov 1 00:29:10.281227 systemd[1]: sshd@2-10.128.0.44:22-147.75.109.163:53380.service: Deactivated successfully. Nov 1 00:29:10.283713 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:29:10.285671 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:29:10.287194 systemd-logind[1438]: Removed session 3. 
Nov 1 00:29:10.334377 systemd[1]: Started sshd@3-10.128.0.44:22-147.75.109.163:46826.service - OpenSSH per-connection server daemon (147.75.109.163:46826). Nov 1 00:29:10.625568 sshd[1655]: Accepted publickey for core from 147.75.109.163 port 46826 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:10.627493 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:10.633735 systemd-logind[1438]: New session 4 of user core. Nov 1 00:29:10.644280 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:29:10.843144 sshd[1655]: pam_unix(sshd:session): session closed for user core Nov 1 00:29:10.848654 systemd[1]: sshd@3-10.128.0.44:22-147.75.109.163:46826.service: Deactivated successfully. Nov 1 00:29:10.851225 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:29:10.852237 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:29:10.853733 systemd-logind[1438]: Removed session 4. Nov 1 00:29:10.899385 systemd[1]: Started sshd@4-10.128.0.44:22-147.75.109.163:46842.service - OpenSSH per-connection server daemon (147.75.109.163:46842). Nov 1 00:29:11.179204 sshd[1662]: Accepted publickey for core from 147.75.109.163 port 46842 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:11.181196 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:11.187690 systemd-logind[1438]: New session 5 of user core. Nov 1 00:29:11.194223 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 1 00:29:11.370624 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:29:11.371163 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:29:11.385890 sudo[1665]: pam_unix(sudo:session): session closed for user root Nov 1 00:29:11.428765 sshd[1662]: pam_unix(sshd:session): session closed for user core Nov 1 00:29:11.434550 systemd[1]: sshd@4-10.128.0.44:22-147.75.109.163:46842.service: Deactivated successfully. Nov 1 00:29:11.436859 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:29:11.437849 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:29:11.439396 systemd-logind[1438]: Removed session 5. Nov 1 00:29:11.483419 systemd[1]: Started sshd@5-10.128.0.44:22-147.75.109.163:46848.service - OpenSSH per-connection server daemon (147.75.109.163:46848). Nov 1 00:29:11.779750 sshd[1670]: Accepted publickey for core from 147.75.109.163 port 46848 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:11.781718 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:11.788111 systemd-logind[1438]: New session 6 of user core. Nov 1 00:29:11.799200 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 1 00:29:11.959475 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:29:11.960111 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:29:11.965415 sudo[1674]: pam_unix(sudo:session): session closed for user root Nov 1 00:29:11.980863 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:29:11.981466 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:29:11.998424 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:29:12.012332 auditctl[1677]: No rules Nov 1 00:29:12.012921 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:29:12.013205 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:29:12.024639 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:29:12.101354 augenrules[1695]: No rules Nov 1 00:29:12.101491 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:29:12.104217 sudo[1673]: pam_unix(sudo:session): session closed for user root Nov 1 00:29:12.147176 sshd[1670]: pam_unix(sshd:session): session closed for user core Nov 1 00:29:12.152336 systemd[1]: sshd@5-10.128.0.44:22-147.75.109.163:46848.service: Deactivated successfully. Nov 1 00:29:12.154490 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:29:12.155458 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:29:12.156974 systemd-logind[1438]: Removed session 6. Nov 1 00:29:12.205423 systemd[1]: Started sshd@6-10.128.0.44:22-147.75.109.163:46852.service - OpenSSH per-connection server daemon (147.75.109.163:46852). 
Nov 1 00:29:12.496158 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 46852 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:29:12.498290 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:29:12.504204 systemd-logind[1438]: New session 7 of user core. Nov 1 00:29:12.510238 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:29:12.675588 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:29:12.676170 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:29:13.113388 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:29:13.114968 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:29:13.540101 dockerd[1723]: time="2025-11-01T00:29:13.539920628Z" level=info msg="Starting up" Nov 1 00:29:13.813552 dockerd[1723]: time="2025-11-01T00:29:13.813408400Z" level=info msg="Loading containers: start." Nov 1 00:29:13.974063 kernel: Initializing XFRM netlink socket Nov 1 00:29:14.093936 systemd-networkd[1357]: docker0: Link UP Nov 1 00:29:14.124064 dockerd[1723]: time="2025-11-01T00:29:14.124000280Z" level=info msg="Loading containers: done." Nov 1 00:29:14.144732 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck200637249-merged.mount: Deactivated successfully. 
Nov 1 00:29:14.145615 dockerd[1723]: time="2025-11-01T00:29:14.145556972Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:29:14.145711 dockerd[1723]: time="2025-11-01T00:29:14.145686979Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 00:29:14.145874 dockerd[1723]: time="2025-11-01T00:29:14.145826637Z" level=info msg="Daemon has completed initialization"
Nov 1 00:29:14.183532 dockerd[1723]: time="2025-11-01T00:29:14.183098103Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:29:14.183335 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 00:29:15.198685 containerd[1462]: time="2025-11-01T00:29:15.198622968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 00:29:15.665118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480778387.mount: Deactivated successfully.
Nov 1 00:29:17.269882 containerd[1462]: time="2025-11-01T00:29:17.269812364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:17.272191 containerd[1462]: time="2025-11-01T00:29:17.272115276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28845499"
Nov 1 00:29:17.274031 containerd[1462]: time="2025-11-01T00:29:17.273250134Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:17.277480 containerd[1462]: time="2025-11-01T00:29:17.277439068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:17.279316 containerd[1462]: time="2025-11-01T00:29:17.279273468Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.080597166s"
Nov 1 00:29:17.279479 containerd[1462]: time="2025-11-01T00:29:17.279452524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 00:29:17.280575 containerd[1462]: time="2025-11-01T00:29:17.280541744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 00:29:18.745528 containerd[1462]: time="2025-11-01T00:29:18.745450329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:18.747232 containerd[1462]: time="2025-11-01T00:29:18.747140224Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24788961"
Nov 1 00:29:18.748775 containerd[1462]: time="2025-11-01T00:29:18.748712653Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:18.752025 containerd[1462]: time="2025-11-01T00:29:18.751926619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:18.754031 containerd[1462]: time="2025-11-01T00:29:18.753554928Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.47296821s"
Nov 1 00:29:18.754031 containerd[1462]: time="2025-11-01T00:29:18.753609742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 00:29:18.754843 containerd[1462]: time="2025-11-01T00:29:18.754796174Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 00:29:19.955690 containerd[1462]: time="2025-11-01T00:29:19.955624486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:19.957356 containerd[1462]: time="2025-11-01T00:29:19.957290580Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19178205"
Nov 1 00:29:19.959000 containerd[1462]: time="2025-11-01T00:29:19.958463754Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:19.962651 containerd[1462]: time="2025-11-01T00:29:19.962588999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:19.964564 containerd[1462]: time="2025-11-01T00:29:19.964373734Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.209528473s"
Nov 1 00:29:19.964564 containerd[1462]: time="2025-11-01T00:29:19.964428694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 00:29:19.965542 containerd[1462]: time="2025-11-01T00:29:19.965503395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 00:29:20.417776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:29:20.425300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:29:20.841386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:29:20.845091 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:29:20.923596 kubelet[1939]: E1101 00:29:20.923545 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:29:20.930339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:29:20.930639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:29:21.401191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1220324879.mount: Deactivated successfully.
Nov 1 00:29:22.119775 containerd[1462]: time="2025-11-01T00:29:22.119689577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:22.121434 containerd[1462]: time="2025-11-01T00:29:22.121170367Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30926101"
Nov 1 00:29:22.124036 containerd[1462]: time="2025-11-01T00:29:22.122753140Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:22.126146 containerd[1462]: time="2025-11-01T00:29:22.126100196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:22.127217 containerd[1462]: time="2025-11-01T00:29:22.127165037Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.161607738s"
Nov 1 00:29:22.127371 containerd[1462]: time="2025-11-01T00:29:22.127341547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 1 00:29:22.128342 containerd[1462]: time="2025-11-01T00:29:22.128296871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 00:29:22.544337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548892982.mount: Deactivated successfully.
Nov 1 00:29:23.785919 containerd[1462]: time="2025-11-01T00:29:23.783774467Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Nov 1 00:29:23.785919 containerd[1462]: time="2025-11-01T00:29:23.784771899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:23.788065 containerd[1462]: time="2025-11-01T00:29:23.787997296Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:23.789712 containerd[1462]: time="2025-11-01T00:29:23.789666242Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.661214563s"
Nov 1 00:29:23.789877 containerd[1462]: time="2025-11-01T00:29:23.789849886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 1 00:29:23.790688 containerd[1462]: time="2025-11-01T00:29:23.790634164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:23.792060 containerd[1462]: time="2025-11-01T00:29:23.791534568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 00:29:24.177517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618888301.mount: Deactivated successfully.
Nov 1 00:29:24.184661 containerd[1462]: time="2025-11-01T00:29:24.184597446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:24.185956 containerd[1462]: time="2025-11-01T00:29:24.185835328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Nov 1 00:29:24.189025 containerd[1462]: time="2025-11-01T00:29:24.187251477Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:24.190197 containerd[1462]: time="2025-11-01T00:29:24.190156875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:24.191441 containerd[1462]: time="2025-11-01T00:29:24.191280532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 399.70263ms"
Nov 1 00:29:24.191441 containerd[1462]: time="2025-11-01T00:29:24.191324915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 00:29:24.192609 containerd[1462]: time="2025-11-01T00:29:24.192370883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 00:29:24.645974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506769174.mount: Deactivated successfully.
Nov 1 00:29:27.462330 containerd[1462]: time="2025-11-01T00:29:27.462255579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:27.464005 containerd[1462]: time="2025-11-01T00:29:27.463946820Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565"
Nov 1 00:29:27.465827 containerd[1462]: time="2025-11-01T00:29:27.465175194Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:27.469031 containerd[1462]: time="2025-11-01T00:29:27.468969644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:29:27.470726 containerd[1462]: time="2025-11-01T00:29:27.470681182Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.278270058s"
Nov 1 00:29:27.470816 containerd[1462]: time="2025-11-01T00:29:27.470731941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 00:29:31.181170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:29:31.190298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:29:31.447464 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 00:29:31.447934 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 00:29:31.448347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:29:31.456495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:29:31.506607 systemd[1]: Reloading requested from client PID 2093 ('systemctl') (unit session-7.scope)...
Nov 1 00:29:31.506628 systemd[1]: Reloading...
Nov 1 00:29:31.683174 zram_generator::config[2137]: No configuration found.
Nov 1 00:29:31.834344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:29:31.940812 systemd[1]: Reloading finished in 433 ms.
Nov 1 00:29:32.005818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:29:32.013891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:29:32.015292 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 00:29:32.015595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:29:32.022407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:29:32.305228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:29:32.316759 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:29:32.375869 kubelet[2187]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:29:32.375869 kubelet[2187]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:29:32.375869 kubelet[2187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:29:32.376414 kubelet[2187]: I1101 00:29:32.375949 2187 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:29:33.031097 kubelet[2187]: I1101 00:29:33.031040 2187 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 00:29:33.031097 kubelet[2187]: I1101 00:29:33.031080 2187 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:29:33.031516 kubelet[2187]: I1101 00:29:33.031477 2187 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 00:29:33.072687 kubelet[2187]: E1101 00:29:33.072630 2187 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:29:33.079696 kubelet[2187]: I1101 00:29:33.079529 2187 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 00:29:33.092272 kubelet[2187]: E1101 00:29:33.092232 2187 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 00:29:33.092272 kubelet[2187]: I1101 00:29:33.092271 2187 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 00:29:33.095514 kubelet[2187]: I1101 00:29:33.095471 2187 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 00:29:33.098108 kubelet[2187]: I1101 00:29:33.098051 2187 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 00:29:33.098326 kubelet[2187]: I1101 00:29:33.098102 2187 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 00:29:33.098535 kubelet[2187]: I1101 00:29:33.098333 2187 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 00:29:33.098535 kubelet[2187]: I1101 00:29:33.098352 2187 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 00:29:33.098535 kubelet[2187]: I1101 00:29:33.098521 2187 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:29:33.105084 kubelet[2187]: I1101 00:29:33.104921 2187 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 00:29:33.105084 kubelet[2187]: I1101 00:29:33.104969 2187 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 00:29:33.105084 kubelet[2187]: I1101 00:29:33.105000 2187 kubelet.go:352] "Adding apiserver pod source"
Nov 1 00:29:33.105084 kubelet[2187]: I1101 00:29:33.105035 2187 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 00:29:33.113840 kubelet[2187]: W1101 00:29:33.113215 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused
Nov 1 00:29:33.113840 kubelet[2187]: E1101 00:29:33.113312 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:29:33.114209 kubelet[2187]: W1101 00:29:33.114143 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused
Nov 1 00:29:33.114289 kubelet[2187]: E1101 00:29:33.114226 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:29:33.114388 kubelet[2187]: I1101 00:29:33.114363 2187 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 00:29:33.116033 kubelet[2187]: I1101 00:29:33.115068 2187 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 00:29:33.116132 kubelet[2187]: W1101 00:29:33.116085 2187 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 00:29:33.120984 kubelet[2187]: I1101 00:29:33.120949 2187 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 00:29:33.121446 kubelet[2187]: I1101 00:29:33.121401 2187 server.go:1287] "Started kubelet"
Nov 1 00:29:33.122981 kubelet[2187]: I1101 00:29:33.122930 2187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 00:29:33.131108 kubelet[2187]: E1101 00:29:33.129102 2187 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84.1873ba882465d622 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,UID:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,},FirstTimestamp:2025-11-01 00:29:33.120968226 +0000 UTC m=+0.798802953,LastTimestamp:2025-11-01 00:29:33.120968226 +0000 UTC m=+0.798802953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,}"
Nov 1 00:29:33.135439 kubelet[2187]: I1101 00:29:33.133105 2187 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 00:29:33.135439 kubelet[2187]: I1101 00:29:33.133880 2187 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 00:29:33.135439 kubelet[2187]: I1101 00:29:33.134273 2187 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 00:29:33.135439 kubelet[2187]: I1101 00:29:33.134470 2187 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 00:29:33.135439 kubelet[2187]: I1101 00:29:33.134542 2187 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 00:29:33.137381 kubelet[2187]: I1101 00:29:33.137359 2187 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 00:29:33.137800 kubelet[2187]: E1101 00:29:33.137772 2187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found"
Nov 1 00:29:33.139090 kubelet[2187]: E1101 00:29:33.139051 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="200ms"
Nov 1 00:29:33.139182 kubelet[2187]: I1101 00:29:33.139144 2187 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 00:29:33.139243 kubelet[2187]: I1101 00:29:33.139184 2187 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 00:29:33.139604 kubelet[2187]: W1101 00:29:33.139550 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused
Nov 1 00:29:33.139691 kubelet[2187]: E1101 00:29:33.139622 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:29:33.141367 kubelet[2187]: I1101 00:29:33.141315 2187 factory.go:221] Registration of the systemd container factory successfully
Nov 1 00:29:33.141458 kubelet[2187]: I1101 00:29:33.141413 2187 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 00:29:33.143068 kubelet[2187]: E1101 00:29:33.142723 2187 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 00:29:33.143442 kubelet[2187]: I1101 00:29:33.143417 2187 factory.go:221] Registration of the containerd container factory successfully
Nov 1 00:29:33.162523 kubelet[2187]: I1101 00:29:33.162452 2187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 1 00:29:33.163968 kubelet[2187]: I1101 00:29:33.163920 2187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 1 00:29:33.163968 kubelet[2187]: I1101 00:29:33.163949 2187 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 1 00:29:33.163968 kubelet[2187]: I1101 00:29:33.163973 2187 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:29:33.164204 kubelet[2187]: I1101 00:29:33.163992 2187 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 1 00:29:33.164204 kubelet[2187]: E1101 00:29:33.164077 2187 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 00:29:33.175978 kubelet[2187]: W1101 00:29:33.175591 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused
Nov 1 00:29:33.175978 kubelet[2187]: E1101 00:29:33.175669 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:29:33.184112 kubelet[2187]: I1101 00:29:33.184070 2187 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 00:29:33.184112 kubelet[2187]: I1101 00:29:33.184091 2187 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 00:29:33.184112 kubelet[2187]: I1101 00:29:33.184114 2187 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:29:33.188068 kubelet[2187]: I1101 00:29:33.188037 2187 policy_none.go:49] "None policy: Start"
Nov 1 00:29:33.188068 kubelet[2187]: I1101 00:29:33.188072 2187 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 1 00:29:33.188215 kubelet[2187]: I1101 00:29:33.188090 2187 state_mem.go:35] "Initializing new in-memory state store"
Nov 1 00:29:33.197109 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 1 00:29:33.212704 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 1 00:29:33.219091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 1 00:29:33.224473 kubelet[2187]: I1101 00:29:33.223145 2187 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 1 00:29:33.224473 kubelet[2187]: I1101 00:29:33.223378 2187 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 00:29:33.224473 kubelet[2187]: I1101 00:29:33.223394 2187 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 00:29:33.224473 kubelet[2187]: I1101 00:29:33.224277 2187 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 00:29:33.228177 kubelet[2187]: E1101 00:29:33.228127 2187 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 00:29:33.228364 kubelet[2187]: E1101 00:29:33.228344 2187 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found"
Nov 1 00:29:33.282504 systemd[1]: Created slice kubepods-burstable-pod00ca3feafd768fcca3dbafc8bdc1f17a.slice - libcontainer container kubepods-burstable-pod00ca3feafd768fcca3dbafc8bdc1f17a.slice.
Nov 1 00:29:33.299991 kubelet[2187]: E1101 00:29:33.299957 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:29:33.304802 systemd[1]: Created slice kubepods-burstable-pod49183c1c1e14f29de127193170c870db.slice - libcontainer container kubepods-burstable-pod49183c1c1e14f29de127193170c870db.slice.
Nov 1 00:29:33.319588 kubelet[2187]: E1101 00:29:33.318845 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.323556 systemd[1]: Created slice kubepods-burstable-podfd5e21633119b28a014929d1f3c0330f.slice - libcontainer container kubepods-burstable-podfd5e21633119b28a014929d1f3c0330f.slice. Nov 1 00:29:33.326694 kubelet[2187]: E1101 00:29:33.326658 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.328414 kubelet[2187]: I1101 00:29:33.328374 2187 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.328956 kubelet[2187]: E1101 00:29:33.328889 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.339691 kubelet[2187]: E1101 00:29:33.339645 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="400ms" Nov 1 00:29:33.340789 kubelet[2187]: I1101 00:29:33.340694 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00ca3feafd768fcca3dbafc8bdc1f17a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" 
(UID: \"00ca3feafd768fcca3dbafc8bdc1f17a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.340789 kubelet[2187]: I1101 00:29:33.340773 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341059 kubelet[2187]: I1101 00:29:33.340814 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341059 kubelet[2187]: I1101 00:29:33.340847 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341059 kubelet[2187]: I1101 00:29:33.340886 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00ca3feafd768fcca3dbafc8bdc1f17a-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: 
\"00ca3feafd768fcca3dbafc8bdc1f17a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341059 kubelet[2187]: I1101 00:29:33.340915 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00ca3feafd768fcca3dbafc8bdc1f17a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"00ca3feafd768fcca3dbafc8bdc1f17a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341214 kubelet[2187]: I1101 00:29:33.340943 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341214 kubelet[2187]: I1101 00:29:33.340972 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.341214 kubelet[2187]: I1101 00:29:33.340999 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd5e21633119b28a014929d1f3c0330f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"fd5e21633119b28a014929d1f3c0330f\") " 
pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.534331 kubelet[2187]: I1101 00:29:33.534190 2187 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.534883 kubelet[2187]: E1101 00:29:33.534659 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.602407 containerd[1462]: time="2025-11-01T00:29:33.601752003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,Uid:00ca3feafd768fcca3dbafc8bdc1f17a,Namespace:kube-system,Attempt:0,}" Nov 1 00:29:33.620709 containerd[1462]: time="2025-11-01T00:29:33.620624341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,Uid:49183c1c1e14f29de127193170c870db,Namespace:kube-system,Attempt:0,}" Nov 1 00:29:33.628627 containerd[1462]: time="2025-11-01T00:29:33.628576491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,Uid:fd5e21633119b28a014929d1f3c0330f,Namespace:kube-system,Attempt:0,}" Nov 1 00:29:33.740316 kubelet[2187]: E1101 00:29:33.740242 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="800ms" Nov 1 00:29:33.940408 kubelet[2187]: I1101 00:29:33.939317 2187 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.940408 
kubelet[2187]: E1101 00:29:33.939906 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.44:6443/api/v1/nodes\": dial tcp 10.128.0.44:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:33.962799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050427067.mount: Deactivated successfully. Nov 1 00:29:33.976042 containerd[1462]: time="2025-11-01T00:29:33.974029809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:29:33.978485 containerd[1462]: time="2025-11-01T00:29:33.978416965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:29:33.980239 containerd[1462]: time="2025-11-01T00:29:33.980184221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Nov 1 00:29:33.983629 containerd[1462]: time="2025-11-01T00:29:33.983116964Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:29:33.985917 containerd[1462]: time="2025-11-01T00:29:33.985504627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:29:33.985917 containerd[1462]: time="2025-11-01T00:29:33.985608830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:29:33.985917 containerd[1462]: time="2025-11-01T00:29:33.985805598Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:29:33.989429 containerd[1462]: time="2025-11-01T00:29:33.989391292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:29:33.992201 containerd[1462]: time="2025-11-01T00:29:33.992163784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 371.442027ms" Nov 1 00:29:33.994573 containerd[1462]: time="2025-11-01T00:29:33.994523298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 365.861836ms" Nov 1 00:29:33.995715 containerd[1462]: time="2025-11-01T00:29:33.995465329Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 393.467488ms" Nov 1 00:29:34.023802 kubelet[2187]: W1101 00:29:34.020082 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Nov 1 00:29:34.023802 kubelet[2187]: E1101 00:29:34.020170 2187 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:29:34.073435 kubelet[2187]: W1101 00:29:34.073385 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Nov 1 00:29:34.073627 kubelet[2187]: E1101 00:29:34.073456 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:29:34.198332 containerd[1462]: time="2025-11-01T00:29:34.197727145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:29:34.198332 containerd[1462]: time="2025-11-01T00:29:34.197811404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:29:34.198332 containerd[1462]: time="2025-11-01T00:29:34.197840800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:34.198332 containerd[1462]: time="2025-11-01T00:29:34.197970396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:34.201783 containerd[1462]: time="2025-11-01T00:29:34.201609806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:29:34.201783 containerd[1462]: time="2025-11-01T00:29:34.201718081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:29:34.202429 containerd[1462]: time="2025-11-01T00:29:34.201794331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:34.202429 containerd[1462]: time="2025-11-01T00:29:34.201974722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:34.208879 containerd[1462]: time="2025-11-01T00:29:34.208152241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:29:34.208879 containerd[1462]: time="2025-11-01T00:29:34.208255089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:29:34.208879 containerd[1462]: time="2025-11-01T00:29:34.208283778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:34.208879 containerd[1462]: time="2025-11-01T00:29:34.208424540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:34.228035 kubelet[2187]: W1101 00:29:34.227635 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Nov 1 00:29:34.228035 kubelet[2187]: E1101 00:29:34.227730 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:29:34.248619 systemd[1]: Started cri-containerd-018132f2f15319eb02370c2dd27fb0c4cd91ef7913b6aef04ffd648789a1a2ab.scope - libcontainer container 018132f2f15319eb02370c2dd27fb0c4cd91ef7913b6aef04ffd648789a1a2ab. Nov 1 00:29:34.259241 systemd[1]: Started cri-containerd-ba29d3bd2e7841786dbda3b5845e3ccb1afd52468e351b2e249671dbbd5ccd5f.scope - libcontainer container ba29d3bd2e7841786dbda3b5845e3ccb1afd52468e351b2e249671dbbd5ccd5f. Nov 1 00:29:34.272429 systemd[1]: Started cri-containerd-ad5ff02a15c308d2c0aca05e2bf3e7af8a6dea679c0c92d3213639314c96c493.scope - libcontainer container ad5ff02a15c308d2c0aca05e2bf3e7af8a6dea679c0c92d3213639314c96c493. 
Nov 1 00:29:34.345947 containerd[1462]: time="2025-11-01T00:29:34.345881893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,Uid:00ca3feafd768fcca3dbafc8bdc1f17a,Namespace:kube-system,Attempt:0,} returns sandbox id \"018132f2f15319eb02370c2dd27fb0c4cd91ef7913b6aef04ffd648789a1a2ab\"" Nov 1 00:29:34.351538 kubelet[2187]: E1101 00:29:34.350970 2187 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e7" Nov 1 00:29:34.353432 containerd[1462]: time="2025-11-01T00:29:34.353361455Z" level=info msg="CreateContainer within sandbox \"018132f2f15319eb02370c2dd27fb0c4cd91ef7913b6aef04ffd648789a1a2ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:29:34.381598 containerd[1462]: time="2025-11-01T00:29:34.381548608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,Uid:fd5e21633119b28a014929d1f3c0330f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba29d3bd2e7841786dbda3b5845e3ccb1afd52468e351b2e249671dbbd5ccd5f\"" Nov 1 00:29:34.385059 containerd[1462]: time="2025-11-01T00:29:34.384423399Z" level=info msg="CreateContainer within sandbox \"018132f2f15319eb02370c2dd27fb0c4cd91ef7913b6aef04ffd648789a1a2ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04bad82a379a9bd7eac2e9f383b3592a159b3c8c663741f56876e0265ad1ce1e\"" Nov 1 00:29:34.387905 containerd[1462]: time="2025-11-01T00:29:34.386177198Z" level=info msg="StartContainer for \"04bad82a379a9bd7eac2e9f383b3592a159b3c8c663741f56876e0265ad1ce1e\"" Nov 1 00:29:34.388986 kubelet[2187]: E1101 00:29:34.388946 2187 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" 
podName="kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e7" Nov 1 00:29:34.391111 containerd[1462]: time="2025-11-01T00:29:34.391066827Z" level=info msg="CreateContainer within sandbox \"ba29d3bd2e7841786dbda3b5845e3ccb1afd52468e351b2e249671dbbd5ccd5f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:29:34.403670 containerd[1462]: time="2025-11-01T00:29:34.403633881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84,Uid:49183c1c1e14f29de127193170c870db,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad5ff02a15c308d2c0aca05e2bf3e7af8a6dea679c0c92d3213639314c96c493\"" Nov 1 00:29:34.407533 kubelet[2187]: E1101 00:29:34.407377 2187 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c9" Nov 1 00:29:34.411081 containerd[1462]: time="2025-11-01T00:29:34.410858731Z" level=info msg="CreateContainer within sandbox \"ad5ff02a15c308d2c0aca05e2bf3e7af8a6dea679c0c92d3213639314c96c493\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:29:34.419333 containerd[1462]: time="2025-11-01T00:29:34.419297084Z" level=info msg="CreateContainer within sandbox \"ba29d3bd2e7841786dbda3b5845e3ccb1afd52468e351b2e249671dbbd5ccd5f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1f8bf3164501726d9e414b542490abc095b3747613e9c54e7d5e4534784edd1a\"" Nov 1 00:29:34.421421 containerd[1462]: time="2025-11-01T00:29:34.419916515Z" level=info msg="StartContainer for \"1f8bf3164501726d9e414b542490abc095b3747613e9c54e7d5e4534784edd1a\"" Nov 1 00:29:34.442591 containerd[1462]: 
time="2025-11-01T00:29:34.442106204Z" level=info msg="CreateContainer within sandbox \"ad5ff02a15c308d2c0aca05e2bf3e7af8a6dea679c0c92d3213639314c96c493\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15990275f5e524838b4b63dfd45562916d0edc2ab50e8359e22e648bf122bf2b\"" Nov 1 00:29:34.442258 systemd[1]: Started cri-containerd-04bad82a379a9bd7eac2e9f383b3592a159b3c8c663741f56876e0265ad1ce1e.scope - libcontainer container 04bad82a379a9bd7eac2e9f383b3592a159b3c8c663741f56876e0265ad1ce1e. Nov 1 00:29:34.444801 containerd[1462]: time="2025-11-01T00:29:34.444746074Z" level=info msg="StartContainer for \"15990275f5e524838b4b63dfd45562916d0edc2ab50e8359e22e648bf122bf2b\"" Nov 1 00:29:34.492227 kubelet[2187]: W1101 00:29:34.490636 2187 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84&limit=500&resourceVersion=0": dial tcp 10.128.0.44:6443: connect: connection refused Nov 1 00:29:34.492498 kubelet[2187]: E1101 00:29:34.492438 2187 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84&limit=500&resourceVersion=0\": dial tcp 10.128.0.44:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:29:34.506285 systemd[1]: Started cri-containerd-1f8bf3164501726d9e414b542490abc095b3747613e9c54e7d5e4534784edd1a.scope - libcontainer container 1f8bf3164501726d9e414b542490abc095b3747613e9c54e7d5e4534784edd1a. Nov 1 00:29:34.519666 systemd[1]: Started cri-containerd-15990275f5e524838b4b63dfd45562916d0edc2ab50e8359e22e648bf122bf2b.scope - libcontainer container 15990275f5e524838b4b63dfd45562916d0edc2ab50e8359e22e648bf122bf2b. 
Nov 1 00:29:34.543140 kubelet[2187]: E1101 00:29:34.543057 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84?timeout=10s\": dial tcp 10.128.0.44:6443: connect: connection refused" interval="1.6s" Nov 1 00:29:34.571459 containerd[1462]: time="2025-11-01T00:29:34.569296763Z" level=info msg="StartContainer for \"04bad82a379a9bd7eac2e9f383b3592a159b3c8c663741f56876e0265ad1ce1e\" returns successfully" Nov 1 00:29:34.614769 containerd[1462]: time="2025-11-01T00:29:34.613948268Z" level=info msg="StartContainer for \"1f8bf3164501726d9e414b542490abc095b3747613e9c54e7d5e4534784edd1a\" returns successfully" Nov 1 00:29:34.652742 containerd[1462]: time="2025-11-01T00:29:34.651912150Z" level=info msg="StartContainer for \"15990275f5e524838b4b63dfd45562916d0edc2ab50e8359e22e648bf122bf2b\" returns successfully" Nov 1 00:29:34.745370 kubelet[2187]: I1101 00:29:34.745245 2187 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:35.202511 kubelet[2187]: E1101 00:29:35.202471 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:35.203307 kubelet[2187]: E1101 00:29:35.203272 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:35.208417 kubelet[2187]: E1101 00:29:35.208383 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:36.225226 kubelet[2187]: E1101 00:29:36.225181 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:36.225777 kubelet[2187]: E1101 00:29:36.225663 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:36.644215 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:29:37.223986 kubelet[2187]: E1101 00:29:37.223933 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:38.365153 kubelet[2187]: E1101 00:29:38.365104 2187 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.311424 kubelet[2187]: E1101 00:29:39.311001 2187 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.380195 kubelet[2187]: I1101 00:29:39.380142 2187 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.381472 kubelet[2187]: E1101 00:29:39.380224 2187 
kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\": node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" not found" Nov 1 00:29:39.438902 kubelet[2187]: I1101 00:29:39.438452 2187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.447467 kubelet[2187]: E1101 00:29:39.447189 2187 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.447467 kubelet[2187]: I1101 00:29:39.447229 2187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.449651 kubelet[2187]: E1101 00:29:39.449498 2187 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.449651 kubelet[2187]: I1101 00:29:39.449531 2187 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:39.451674 kubelet[2187]: E1101 00:29:39.451602 2187 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:40.117877 kubelet[2187]: I1101 
00:29:40.117833 2187 apiserver.go:52] "Watching apiserver" Nov 1 00:29:40.140502 kubelet[2187]: I1101 00:29:40.139948 2187 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:29:41.292754 systemd[1]: Reloading requested from client PID 2461 ('systemctl') (unit session-7.scope)... Nov 1 00:29:41.292774 systemd[1]: Reloading... Nov 1 00:29:41.429164 zram_generator::config[2501]: No configuration found. Nov 1 00:29:41.571983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:29:41.694418 systemd[1]: Reloading finished in 400 ms. Nov 1 00:29:41.747996 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:29:41.754376 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:29:41.754671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:29:41.754737 systemd[1]: kubelet.service: Consumed 1.274s CPU time, 134.1M memory peak, 0B memory swap peak. Nov 1 00:29:41.761465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:29:42.118234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:29:42.127865 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:29:42.197775 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:29:42.198274 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:29:42.198366 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:29:42.198677 kubelet[2549]: I1101 00:29:42.198624 2549 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:29:42.210751 kubelet[2549]: I1101 00:29:42.210676 2549 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:29:42.210943 kubelet[2549]: I1101 00:29:42.210920 2549 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:29:42.211675 kubelet[2549]: I1101 00:29:42.211648 2549 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:29:42.213275 kubelet[2549]: I1101 00:29:42.213252 2549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:29:42.215895 kubelet[2549]: I1101 00:29:42.215870 2549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:29:42.222040 kubelet[2549]: E1101 00:29:42.220492 2549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:29:42.222040 kubelet[2549]: I1101 00:29:42.220543 2549 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:29:42.223825 kubelet[2549]: I1101 00:29:42.223804 2549 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:29:42.224371 kubelet[2549]: I1101 00:29:42.224339 2549 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:29:42.224732 kubelet[2549]: I1101 00:29:42.224475 2549 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:29:42.224942 kubelet[2549]: I1101 00:29:42.224925 2549 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 1 00:29:42.225075 kubelet[2549]: I1101 00:29:42.225062 2549 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:29:42.225225 kubelet[2549]: I1101 00:29:42.225211 2549 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:29:42.225505 kubelet[2549]: I1101 00:29:42.225489 2549 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:29:42.225621 kubelet[2549]: I1101 00:29:42.225607 2549 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:29:42.225733 kubelet[2549]: I1101 00:29:42.225720 2549 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:29:42.225840 kubelet[2549]: I1101 00:29:42.225826 2549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:29:42.242184 kubelet[2549]: I1101 00:29:42.242145 2549 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:29:42.242911 kubelet[2549]: I1101 00:29:42.242874 2549 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:29:42.243666 kubelet[2549]: I1101 00:29:42.243630 2549 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:29:42.243764 kubelet[2549]: I1101 00:29:42.243688 2549 server.go:1287] "Started kubelet" Nov 1 00:29:42.248976 kubelet[2549]: I1101 00:29:42.248901 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:29:42.253844 kubelet[2549]: I1101 00:29:42.253781 2549 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:29:42.254985 kubelet[2549]: I1101 00:29:42.254966 2549 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:29:42.257577 kubelet[2549]: I1101 00:29:42.257552 2549 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:29:42.258241 kubelet[2549]: I1101 00:29:42.258216 2549 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:29:42.261062 kubelet[2549]: I1101 00:29:42.250732 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:29:42.261629 kubelet[2549]: I1101 00:29:42.261608 2549 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:29:42.261954 kubelet[2549]: I1101 00:29:42.261913 2549 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:29:42.262336 kubelet[2549]: I1101 00:29:42.262320 2549 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:29:42.265469 kubelet[2549]: E1101 00:29:42.265443 2549 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:29:42.265915 kubelet[2549]: I1101 00:29:42.265731 2549 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:29:42.266474 kubelet[2549]: I1101 00:29:42.266332 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:29:42.277276 kubelet[2549]: I1101 00:29:42.274701 2549 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:29:42.290043 kubelet[2549]: I1101 00:29:42.289982 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:29:42.295782 kubelet[2549]: I1101 00:29:42.295757 2549 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:29:42.298672 kubelet[2549]: I1101 00:29:42.297121 2549 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:29:42.298776 kubelet[2549]: I1101 00:29:42.298707 2549 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:29:42.301353 kubelet[2549]: I1101 00:29:42.301277 2549 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:29:42.302633 kubelet[2549]: E1101 00:29:42.301376 2549 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.376721 2549 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.376746 2549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.376772 2549 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.377007 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.377043 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.377074 2549 policy_none.go:49] "None policy: Start" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.377090 2549 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.377107 2549 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:29:42.377752 kubelet[2549]: I1101 00:29:42.377305 2549 state_mem.go:75] "Updated machine memory state" Nov 1 00:29:42.389039 kubelet[2549]: I1101 00:29:42.388961 2549 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:29:42.390867 kubelet[2549]: I1101 00:29:42.390808 
2549 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:29:42.390867 kubelet[2549]: I1101 00:29:42.390833 2549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:29:42.391754 kubelet[2549]: I1101 00:29:42.391466 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:29:42.397061 kubelet[2549]: E1101 00:29:42.395153 2549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:29:42.402880 kubelet[2549]: I1101 00:29:42.402822 2549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.403925 kubelet[2549]: I1101 00:29:42.403866 2549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.404784 kubelet[2549]: I1101 00:29:42.404761 2549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.420238 kubelet[2549]: W1101 00:29:42.420183 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:29:42.421330 kubelet[2549]: W1101 00:29:42.421121 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:29:42.422333 kubelet[2549]: W1101 00:29:42.422309 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:29:42.465754 kubelet[2549]: I1101 00:29:42.465709 2549 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.465989 kubelet[2549]: I1101 00:29:42.465966 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd5e21633119b28a014929d1f3c0330f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"fd5e21633119b28a014929d1f3c0330f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466151 kubelet[2549]: I1101 00:29:42.466128 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00ca3feafd768fcca3dbafc8bdc1f17a-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"00ca3feafd768fcca3dbafc8bdc1f17a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466321 kubelet[2549]: I1101 00:29:42.466302 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00ca3feafd768fcca3dbafc8bdc1f17a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"00ca3feafd768fcca3dbafc8bdc1f17a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466476 kubelet[2549]: I1101 00:29:42.466453 2549 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00ca3feafd768fcca3dbafc8bdc1f17a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"00ca3feafd768fcca3dbafc8bdc1f17a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466605 kubelet[2549]: I1101 00:29:42.466586 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466804 kubelet[2549]: I1101 00:29:42.466749 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466804 kubelet[2549]: I1101 00:29:42.466790 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.466949 kubelet[2549]: I1101 00:29:42.466819 2549 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49183c1c1e14f29de127193170c870db-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" (UID: \"49183c1c1e14f29de127193170c870db\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.517141 kubelet[2549]: I1101 00:29:42.516329 2549 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.528043 kubelet[2549]: I1101 00:29:42.527066 2549 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:42.528043 kubelet[2549]: I1101 00:29:42.527384 2549 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:43.242285 kubelet[2549]: I1101 00:29:43.241982 2549 apiserver.go:52] "Watching apiserver" Nov 1 00:29:43.262856 kubelet[2549]: I1101 00:29:43.262784 2549 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:29:43.330245 kubelet[2549]: I1101 00:29:43.329232 2549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:43.341711 kubelet[2549]: W1101 00:29:43.341683 2549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:29:43.342080 kubelet[2549]: E1101 00:29:43.342038 2549 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:29:43.360947 kubelet[2549]: I1101 
00:29:43.360879 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" podStartSLOduration=1.36083832 podStartE2EDuration="1.36083832s" podCreationTimestamp="2025-11-01 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:29:43.360504497 +0000 UTC m=+1.226920817" watchObservedRunningTime="2025-11-01 00:29:43.36083832 +0000 UTC m=+1.227254643" Nov 1 00:29:43.384266 kubelet[2549]: I1101 00:29:43.383664 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" podStartSLOduration=1.383439586 podStartE2EDuration="1.383439586s" podCreationTimestamp="2025-11-01 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:29:43.372089138 +0000 UTC m=+1.238505457" watchObservedRunningTime="2025-11-01 00:29:43.383439586 +0000 UTC m=+1.249855895" Nov 1 00:29:43.384812 kubelet[2549]: I1101 00:29:43.384222 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" podStartSLOduration=1.384205434 podStartE2EDuration="1.384205434s" podCreationTimestamp="2025-11-01 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:29:43.384033866 +0000 UTC m=+1.250450177" watchObservedRunningTime="2025-11-01 00:29:43.384205434 +0000 UTC m=+1.250621753" Nov 1 00:29:48.091336 kubelet[2549]: I1101 00:29:48.091265 2549 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:29:48.092003 containerd[1462]: 
time="2025-11-01T00:29:48.091947925Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:29:48.092707 kubelet[2549]: I1101 00:29:48.092258 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:29:49.061111 systemd[1]: Created slice kubepods-besteffort-podad3e40a9_147f_4538_bd0f_9000e7538493.slice - libcontainer container kubepods-besteffort-podad3e40a9_147f_4538_bd0f_9000e7538493.slice. Nov 1 00:29:49.071445 kubelet[2549]: W1101 00:29:49.071405 2549 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object Nov 1 00:29:49.073049 kubelet[2549]: E1101 00:29:49.072069 2549 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object" logger="UnhandledError" Nov 1 00:29:49.113371 kubelet[2549]: I1101 00:29:49.113271 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad3e40a9-147f-4538-bd0f-9000e7538493-xtables-lock\") pod \"kube-proxy-px2mx\" (UID: \"ad3e40a9-147f-4538-bd0f-9000e7538493\") " pod="kube-system/kube-proxy-px2mx" Nov 1 00:29:49.113951 kubelet[2549]: I1101 00:29:49.113411 2549 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad3e40a9-147f-4538-bd0f-9000e7538493-kube-proxy\") pod \"kube-proxy-px2mx\" (UID: \"ad3e40a9-147f-4538-bd0f-9000e7538493\") " pod="kube-system/kube-proxy-px2mx" Nov 1 00:29:49.113951 kubelet[2549]: I1101 00:29:49.113509 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad3e40a9-147f-4538-bd0f-9000e7538493-lib-modules\") pod \"kube-proxy-px2mx\" (UID: \"ad3e40a9-147f-4538-bd0f-9000e7538493\") " pod="kube-system/kube-proxy-px2mx" Nov 1 00:29:49.113951 kubelet[2549]: I1101 00:29:49.113543 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4nb\" (UniqueName: \"kubernetes.io/projected/ad3e40a9-147f-4538-bd0f-9000e7538493-kube-api-access-hl4nb\") pod \"kube-proxy-px2mx\" (UID: \"ad3e40a9-147f-4538-bd0f-9000e7538493\") " pod="kube-system/kube-proxy-px2mx" Nov 1 00:29:49.261363 systemd[1]: Created slice kubepods-besteffort-pod14c64e76_c338_475a_9bca_aab75ca3cad2.slice - libcontainer container kubepods-besteffort-pod14c64e76_c338_475a_9bca_aab75ca3cad2.slice. 
Nov 1 00:29:49.315757 kubelet[2549]: I1101 00:29:49.315554 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/14c64e76-c338-475a-9bca-aab75ca3cad2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cnvqg\" (UID: \"14c64e76-c338-475a-9bca-aab75ca3cad2\") " pod="tigera-operator/tigera-operator-7dcd859c48-cnvqg" Nov 1 00:29:49.315757 kubelet[2549]: I1101 00:29:49.315618 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6twf\" (UniqueName: \"kubernetes.io/projected/14c64e76-c338-475a-9bca-aab75ca3cad2-kube-api-access-f6twf\") pod \"tigera-operator-7dcd859c48-cnvqg\" (UID: \"14c64e76-c338-475a-9bca-aab75ca3cad2\") " pod="tigera-operator/tigera-operator-7dcd859c48-cnvqg" Nov 1 00:29:49.566633 containerd[1462]: time="2025-11-01T00:29:49.566438866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cnvqg,Uid:14c64e76-c338-475a-9bca-aab75ca3cad2,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:29:49.647051 containerd[1462]: time="2025-11-01T00:29:49.645576410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:29:49.647051 containerd[1462]: time="2025-11-01T00:29:49.645691545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:29:49.647051 containerd[1462]: time="2025-11-01T00:29:49.645721518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:49.647051 containerd[1462]: time="2025-11-01T00:29:49.645880143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:49.687234 systemd[1]: Started cri-containerd-b65b422f86a42230a7353095b7f3c4f454cd80ae843b5e91e4842f9124a8ce54.scope - libcontainer container b65b422f86a42230a7353095b7f3c4f454cd80ae843b5e91e4842f9124a8ce54. Nov 1 00:29:49.744106 containerd[1462]: time="2025-11-01T00:29:49.744056468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cnvqg,Uid:14c64e76-c338-475a-9bca-aab75ca3cad2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b65b422f86a42230a7353095b7f3c4f454cd80ae843b5e91e4842f9124a8ce54\"" Nov 1 00:29:49.746720 containerd[1462]: time="2025-11-01T00:29:49.746676813Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:29:50.217392 kubelet[2549]: E1101 00:29:50.215387 2549 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:29:50.217392 kubelet[2549]: E1101 00:29:50.215509 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad3e40a9-147f-4538-bd0f-9000e7538493-kube-proxy podName:ad3e40a9-147f-4538-bd0f-9000e7538493 nodeName:}" failed. No retries permitted until 2025-11-01 00:29:50.715475554 +0000 UTC m=+8.581891860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ad3e40a9-147f-4538-bd0f-9000e7538493-kube-proxy") pod "kube-proxy-px2mx" (UID: "ad3e40a9-147f-4538-bd0f-9000e7538493") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:29:50.878171 containerd[1462]: time="2025-11-01T00:29:50.878042553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-px2mx,Uid:ad3e40a9-147f-4538-bd0f-9000e7538493,Namespace:kube-system,Attempt:0,}" Nov 1 00:29:50.913056 containerd[1462]: time="2025-11-01T00:29:50.912456870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:29:50.913056 containerd[1462]: time="2025-11-01T00:29:50.912548753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:29:50.913056 containerd[1462]: time="2025-11-01T00:29:50.912578624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:50.913056 containerd[1462]: time="2025-11-01T00:29:50.912753726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:29:50.946239 systemd[1]: Started cri-containerd-77537ad536e1ce6e19ac2c8c7e92dc583e90b8415362c280fa4bc6cc0e2cac52.scope - libcontainer container 77537ad536e1ce6e19ac2c8c7e92dc583e90b8415362c280fa4bc6cc0e2cac52. Nov 1 00:29:50.980062 containerd[1462]: time="2025-11-01T00:29:50.979993548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-px2mx,Uid:ad3e40a9-147f-4538-bd0f-9000e7538493,Namespace:kube-system,Attempt:0,} returns sandbox id \"77537ad536e1ce6e19ac2c8c7e92dc583e90b8415362c280fa4bc6cc0e2cac52\"" Nov 1 00:29:50.985639 containerd[1462]: time="2025-11-01T00:29:50.985579498Z" level=info msg="CreateContainer within sandbox \"77537ad536e1ce6e19ac2c8c7e92dc583e90b8415362c280fa4bc6cc0e2cac52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:29:51.009007 containerd[1462]: time="2025-11-01T00:29:51.008950234Z" level=info msg="CreateContainer within sandbox \"77537ad536e1ce6e19ac2c8c7e92dc583e90b8415362c280fa4bc6cc0e2cac52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb7b15571317a23f9598dc1550cddb162977de86137eadd41a39018a93c9ceb0\"" Nov 1 00:29:51.012170 containerd[1462]: time="2025-11-01T00:29:51.010676429Z" level=info msg="StartContainer for \"eb7b15571317a23f9598dc1550cddb162977de86137eadd41a39018a93c9ceb0\"" Nov 1 
00:29:51.053229 systemd[1]: Started cri-containerd-eb7b15571317a23f9598dc1550cddb162977de86137eadd41a39018a93c9ceb0.scope - libcontainer container eb7b15571317a23f9598dc1550cddb162977de86137eadd41a39018a93c9ceb0. Nov 1 00:29:51.111526 containerd[1462]: time="2025-11-01T00:29:51.111475187Z" level=info msg="StartContainer for \"eb7b15571317a23f9598dc1550cddb162977de86137eadd41a39018a93c9ceb0\" returns successfully" Nov 1 00:29:51.395059 update_engine[1439]: I20251101 00:29:51.391557 1439 update_attempter.cc:509] Updating boot flags... Nov 1 00:29:51.535248 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2767) Nov 1 00:29:51.820093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2765) Nov 1 00:29:52.948848 containerd[1462]: time="2025-11-01T00:29:52.948779099Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:29:52.950234 containerd[1462]: time="2025-11-01T00:29:52.950167465Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:29:52.951636 containerd[1462]: time="2025-11-01T00:29:52.951569572Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:29:52.954630 containerd[1462]: time="2025-11-01T00:29:52.954588244Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:29:52.955819 containerd[1462]: time="2025-11-01T00:29:52.955641026Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", 
repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.208918955s" Nov 1 00:29:52.955819 containerd[1462]: time="2025-11-01T00:29:52.955686179Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:29:52.958892 containerd[1462]: time="2025-11-01T00:29:52.958732369Z" level=info msg="CreateContainer within sandbox \"b65b422f86a42230a7353095b7f3c4f454cd80ae843b5e91e4842f9124a8ce54\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:29:52.979319 containerd[1462]: time="2025-11-01T00:29:52.979210663Z" level=info msg="CreateContainer within sandbox \"b65b422f86a42230a7353095b7f3c4f454cd80ae843b5e91e4842f9124a8ce54\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"82e093555718e0f671d2beeeec3654d0910e15fe8abe72e6ebec6c9fcc524074\"" Nov 1 00:29:52.981144 containerd[1462]: time="2025-11-01T00:29:52.980661628Z" level=info msg="StartContainer for \"82e093555718e0f671d2beeeec3654d0910e15fe8abe72e6ebec6c9fcc524074\"" Nov 1 00:29:53.042218 systemd[1]: Started cri-containerd-82e093555718e0f671d2beeeec3654d0910e15fe8abe72e6ebec6c9fcc524074.scope - libcontainer container 82e093555718e0f671d2beeeec3654d0910e15fe8abe72e6ebec6c9fcc524074. 
Nov 1 00:29:53.083079 containerd[1462]: time="2025-11-01T00:29:53.081827256Z" level=info msg="StartContainer for \"82e093555718e0f671d2beeeec3654d0910e15fe8abe72e6ebec6c9fcc524074\" returns successfully" Nov 1 00:29:53.364234 kubelet[2549]: I1101 00:29:53.364140 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-px2mx" podStartSLOduration=4.3641173890000005 podStartE2EDuration="4.364117389s" podCreationTimestamp="2025-11-01 00:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:29:51.376436524 +0000 UTC m=+9.242852844" watchObservedRunningTime="2025-11-01 00:29:53.364117389 +0000 UTC m=+11.230533710" Nov 1 00:30:00.201222 sudo[1706]: pam_unix(sudo:session): session closed for user root Nov 1 00:30:00.246324 sshd[1703]: pam_unix(sshd:session): session closed for user core Nov 1 00:30:00.254504 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:30:00.257509 systemd[1]: sshd@6-10.128.0.44:22-147.75.109.163:46852.service: Deactivated successfully. Nov 1 00:30:00.262652 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:30:00.263170 systemd[1]: session-7.scope: Consumed 6.776s CPU time, 155.4M memory peak, 0B memory swap peak. Nov 1 00:30:00.267306 systemd-logind[1438]: Removed session 7. 
Nov 1 00:30:07.666946 kubelet[2549]: I1101 00:30:07.665444 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cnvqg" podStartSLOduration=15.45449735 podStartE2EDuration="18.665422044s" podCreationTimestamp="2025-11-01 00:29:49 +0000 UTC" firstStartedPulling="2025-11-01 00:29:49.74589071 +0000 UTC m=+7.612307016" lastFinishedPulling="2025-11-01 00:29:52.956815398 +0000 UTC m=+10.823231710" observedRunningTime="2025-11-01 00:29:53.366276951 +0000 UTC m=+11.232693271" watchObservedRunningTime="2025-11-01 00:30:07.665422044 +0000 UTC m=+25.531838367" Nov 1 00:30:07.682662 systemd[1]: Created slice kubepods-besteffort-pode65ba6db_6a75_49a2_a6f9_0c14344937fb.slice - libcontainer container kubepods-besteffort-pode65ba6db_6a75_49a2_a6f9_0c14344937fb.slice. Nov 1 00:30:07.689301 kubelet[2549]: W1101 00:30:07.688337 2549 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object Nov 1 00:30:07.689301 kubelet[2549]: E1101 00:30:07.689148 2549 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object" logger="UnhandledError" Nov 1 00:30:07.690654 kubelet[2549]: I1101 00:30:07.690387 2549 status_manager.go:890] "Failed to get status for pod" podUID="e65ba6db-6a75-49a2-a6f9-0c14344937fb" 
pod="calico-system/calico-typha-5f75cd78b9-gj48k" err="pods \"calico-typha-5f75cd78b9-gj48k\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object" Nov 1 00:30:07.692079 kubelet[2549]: W1101 00:30:07.691258 2549 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object Nov 1 00:30:07.692079 kubelet[2549]: W1101 00:30:07.691531 2549 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object Nov 1 00:30:07.692079 kubelet[2549]: E1101 00:30:07.691909 2549 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object" logger="UnhandledError" Nov 1 00:30:07.692409 kubelet[2549]: E1101 00:30:07.692376 2549 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object" logger="UnhandledError" Nov 1 00:30:07.754958 kubelet[2549]: I1101 00:30:07.753890 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65ba6db-6a75-49a2-a6f9-0c14344937fb-tigera-ca-bundle\") pod \"calico-typha-5f75cd78b9-gj48k\" (UID: \"e65ba6db-6a75-49a2-a6f9-0c14344937fb\") " pod="calico-system/calico-typha-5f75cd78b9-gj48k" Nov 1 00:30:07.755331 kubelet[2549]: I1101 00:30:07.755238 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e65ba6db-6a75-49a2-a6f9-0c14344937fb-typha-certs\") pod \"calico-typha-5f75cd78b9-gj48k\" (UID: \"e65ba6db-6a75-49a2-a6f9-0c14344937fb\") " pod="calico-system/calico-typha-5f75cd78b9-gj48k" Nov 1 00:30:07.755331 kubelet[2549]: I1101 00:30:07.755287 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdbs\" (UniqueName: \"kubernetes.io/projected/e65ba6db-6a75-49a2-a6f9-0c14344937fb-kube-api-access-kxdbs\") pod \"calico-typha-5f75cd78b9-gj48k\" (UID: \"e65ba6db-6a75-49a2-a6f9-0c14344937fb\") " pod="calico-system/calico-typha-5f75cd78b9-gj48k" Nov 1 00:30:07.910090 systemd[1]: Created slice kubepods-besteffort-podb64ecb02_1a47_4341_b749_58d344430179.slice - libcontainer container kubepods-besteffort-podb64ecb02_1a47_4341_b749_58d344430179.slice. 
Nov 1 00:30:07.956867 kubelet[2549]: I1101 00:30:07.956725 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b64ecb02-1a47-4341-b749-58d344430179-node-certs\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.956867 kubelet[2549]: I1101 00:30:07.956779 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-var-lib-calico\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.956867 kubelet[2549]: I1101 00:30:07.956808 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bwh\" (UniqueName: \"kubernetes.io/projected/b64ecb02-1a47-4341-b749-58d344430179-kube-api-access-x7bwh\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958222 kubelet[2549]: I1101 00:30:07.956841 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-flexvol-driver-host\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958370 kubelet[2549]: I1101 00:30:07.958338 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-xtables-lock\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958445 kubelet[2549]: I1101 00:30:07.958374 
2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-cni-net-dir\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958506 kubelet[2549]: I1101 00:30:07.958442 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-lib-modules\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958506 kubelet[2549]: I1101 00:30:07.958492 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-cni-log-dir\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958624 kubelet[2549]: I1101 00:30:07.958521 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-policysync\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958624 kubelet[2549]: I1101 00:30:07.958603 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-var-run-calico\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958739 kubelet[2549]: I1101 00:30:07.958674 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b64ecb02-1a47-4341-b749-58d344430179-cni-bin-dir\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:07.958801 kubelet[2549]: I1101 00:30:07.958704 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64ecb02-1a47-4341-b749-58d344430179-tigera-ca-bundle\") pod \"calico-node-d7jlj\" (UID: \"b64ecb02-1a47-4341-b749-58d344430179\") " pod="calico-system/calico-node-d7jlj" Nov 1 00:30:08.072210 kubelet[2549]: E1101 00:30:08.072169 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.072210 kubelet[2549]: W1101 00:30:08.072204 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.072421 kubelet[2549]: E1101 00:30:08.072251 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:08.127870 kubelet[2549]: E1101 00:30:08.127806 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:08.133412 kubelet[2549]: E1101 00:30:08.133289 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.133412 kubelet[2549]: W1101 00:30:08.133319 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.133412 kubelet[2549]: E1101 00:30:08.133346 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:08.133780 kubelet[2549]: E1101 00:30:08.133702 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.133780 kubelet[2549]: W1101 00:30:08.133722 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.133780 kubelet[2549]: E1101 00:30:08.133740 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:08.162627 kubelet[2549]: E1101 00:30:08.162553 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.162627 kubelet[2549]: W1101 00:30:08.162575 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.162627 kubelet[2549]: E1101 00:30:08.162596 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:08.163176 kubelet[2549]: I1101 00:30:08.162917 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f2c53676-0b50-4c2c-9234-572240cab45e-varrun\") pod \"csi-node-driver-9v2bt\" (UID: \"f2c53676-0b50-4c2c-9234-572240cab45e\") " pod="calico-system/csi-node-driver-9v2bt" Nov 1 00:30:08.163455 kubelet[2549]: E1101 00:30:08.163318 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.163455 kubelet[2549]: W1101 00:30:08.163332 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.163455 kubelet[2549]: E1101 00:30:08.163366 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:08.164208 kubelet[2549]: E1101 00:30:08.164063 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.164208 kubelet[2549]: W1101 00:30:08.164081 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.164208 kubelet[2549]: E1101 00:30:08.164113 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:08.165043 kubelet[2549]: E1101 00:30:08.164878 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.165043 kubelet[2549]: W1101 00:30:08.164898 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.165043 kubelet[2549]: E1101 00:30:08.164916 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:08.165043 kubelet[2549]: I1101 00:30:08.164979 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2c53676-0b50-4c2c-9234-572240cab45e-socket-dir\") pod \"csi-node-driver-9v2bt\" (UID: \"f2c53676-0b50-4c2c-9234-572240cab45e\") " pod="calico-system/csi-node-driver-9v2bt" Nov 1 00:30:08.166279 kubelet[2549]: E1101 00:30:08.165850 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.166279 kubelet[2549]: W1101 00:30:08.165872 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.166279 kubelet[2549]: E1101 00:30:08.165922 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:08.166910 kubelet[2549]: E1101 00:30:08.166733 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.166910 kubelet[2549]: W1101 00:30:08.166753 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.166910 kubelet[2549]: E1101 00:30:08.166777 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:08.167543 kubelet[2549]: E1101 00:30:08.167321 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.167543 kubelet[2549]: W1101 00:30:08.167376 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.167543 kubelet[2549]: E1101 00:30:08.167401 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:08.167994 kubelet[2549]: I1101 00:30:08.167757 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjk5v\" (UniqueName: \"kubernetes.io/projected/f2c53676-0b50-4c2c-9234-572240cab45e-kube-api-access-hjk5v\") pod \"csi-node-driver-9v2bt\" (UID: \"f2c53676-0b50-4c2c-9234-572240cab45e\") " pod="calico-system/csi-node-driver-9v2bt" Nov 1 00:30:08.168315 kubelet[2549]: E1101 00:30:08.168295 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:08.168720 kubelet[2549]: W1101 00:30:08.168435 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:08.168720 kubelet[2549]: E1101 00:30:08.168516 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 1 00:30:08.169240 kubelet[2549]: I1101 00:30:08.168549 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2c53676-0b50-4c2c-9234-572240cab45e-registration-dir\") pod \"csi-node-driver-9v2bt\" (UID: \"f2c53676-0b50-4c2c-9234-572240cab45e\") " pod="calico-system/csi-node-driver-9v2bt"
Nov 1 00:30:08.169461 kubelet[2549]: E1101 00:30:08.169359 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.169461 kubelet[2549]: W1101 00:30:08.169378 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.169958 kubelet[2549]: E1101 00:30:08.169663 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.170582 kubelet[2549]: E1101 00:30:08.170415 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.170582 kubelet[2549]: W1101 00:30:08.170435 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.170582 kubelet[2549]: E1101 00:30:08.170490 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.170582 kubelet[2549]: I1101 00:30:08.170519 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2c53676-0b50-4c2c-9234-572240cab45e-kubelet-dir\") pod \"csi-node-driver-9v2bt\" (UID: \"f2c53676-0b50-4c2c-9234-572240cab45e\") " pod="calico-system/csi-node-driver-9v2bt"
Nov 1 00:30:08.171583 kubelet[2549]: E1101 00:30:08.171296 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.171583 kubelet[2549]: W1101 00:30:08.171316 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.171583 kubelet[2549]: E1101 00:30:08.171383 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.172140 kubelet[2549]: E1101 00:30:08.172070 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.172140 kubelet[2549]: W1101 00:30:08.172090 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.172140 kubelet[2549]: E1101 00:30:08.172107 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.173086 kubelet[2549]: E1101 00:30:08.172993 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.173086 kubelet[2549]: W1101 00:30:08.173059 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.173483 kubelet[2549]: E1101 00:30:08.173296 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.174524 kubelet[2549]: E1101 00:30:08.174230 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.174524 kubelet[2549]: W1101 00:30:08.174250 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.174524 kubelet[2549]: E1101 00:30:08.174268 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.176228 kubelet[2549]: E1101 00:30:08.175998 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.176228 kubelet[2549]: W1101 00:30:08.176041 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.176228 kubelet[2549]: E1101 00:30:08.176060 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.274095 kubelet[2549]: E1101 00:30:08.273496 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.274095 kubelet[2549]: W1101 00:30:08.273527 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.274095 kubelet[2549]: E1101 00:30:08.273558 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.274095 kubelet[2549]: E1101 00:30:08.274076 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.274095 kubelet[2549]: W1101 00:30:08.274093 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.274463 kubelet[2549]: E1101 00:30:08.274131 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.276207 kubelet[2549]: E1101 00:30:08.276176 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.276207 kubelet[2549]: W1101 00:30:08.276203 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.276386 kubelet[2549]: E1101 00:30:08.276251 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.276858 kubelet[2549]: E1101 00:30:08.276642 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.276858 kubelet[2549]: W1101 00:30:08.276662 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.276858 kubelet[2549]: E1101 00:30:08.276699 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.277597 kubelet[2549]: E1101 00:30:08.277495 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.277597 kubelet[2549]: W1101 00:30:08.277518 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.277597 kubelet[2549]: E1101 00:30:08.277559 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.280033 kubelet[2549]: E1101 00:30:08.279248 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.280033 kubelet[2549]: W1101 00:30:08.279273 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.280701 kubelet[2549]: E1101 00:30:08.280668 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.283557 kubelet[2549]: E1101 00:30:08.283527 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.283557 kubelet[2549]: W1101 00:30:08.283555 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.285203 kubelet[2549]: E1101 00:30:08.285170 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.286108 kubelet[2549]: E1101 00:30:08.286081 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.286108 kubelet[2549]: W1101 00:30:08.286107 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.286267 kubelet[2549]: E1101 00:30:08.286136 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.288134 kubelet[2549]: E1101 00:30:08.288092 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.288134 kubelet[2549]: W1101 00:30:08.288117 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.288282 kubelet[2549]: E1101 00:30:08.288213 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.290033 kubelet[2549]: E1101 00:30:08.288509 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.290033 kubelet[2549]: W1101 00:30:08.288527 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.290033 kubelet[2549]: E1101 00:30:08.288621 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.290033 kubelet[2549]: E1101 00:30:08.289214 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.290033 kubelet[2549]: W1101 00:30:08.289230 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.290033 kubelet[2549]: E1101 00:30:08.289609 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.290365 kubelet[2549]: E1101 00:30:08.290224 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.290365 kubelet[2549]: W1101 00:30:08.290241 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.291789 kubelet[2549]: E1101 00:30:08.291758 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.292721 kubelet[2549]: E1101 00:30:08.292696 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.292721 kubelet[2549]: W1101 00:30:08.292720 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.292874 kubelet[2549]: E1101 00:30:08.292815 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.293163 kubelet[2549]: E1101 00:30:08.293142 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.293163 kubelet[2549]: W1101 00:30:08.293163 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.294105 kubelet[2549]: E1101 00:30:08.294076 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.294212 kubelet[2549]: E1101 00:30:08.294183 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.294212 kubelet[2549]: W1101 00:30:08.294196 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.294328 kubelet[2549]: E1101 00:30:08.294288 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.294679 kubelet[2549]: E1101 00:30:08.294657 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.294679 kubelet[2549]: W1101 00:30:08.294678 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.294798 kubelet[2549]: E1101 00:30:08.294775 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.295310 kubelet[2549]: E1101 00:30:08.295286 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.295310 kubelet[2549]: W1101 00:30:08.295308 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.296147 kubelet[2549]: E1101 00:30:08.296121 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.297036 kubelet[2549]: E1101 00:30:08.296428 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.297036 kubelet[2549]: W1101 00:30:08.296444 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.297036 kubelet[2549]: E1101 00:30:08.296534 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.297036 kubelet[2549]: E1101 00:30:08.296812 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.297036 kubelet[2549]: W1101 00:30:08.296823 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.297036 kubelet[2549]: E1101 00:30:08.296922 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.297518 kubelet[2549]: E1101 00:30:08.297494 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.297518 kubelet[2549]: W1101 00:30:08.297518 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.297660 kubelet[2549]: E1101 00:30:08.297541 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.298177 kubelet[2549]: E1101 00:30:08.298153 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.298177 kubelet[2549]: W1101 00:30:08.298175 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.298433 kubelet[2549]: E1101 00:30:08.298407 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.299991 kubelet[2549]: E1101 00:30:08.299078 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.299991 kubelet[2549]: W1101 00:30:08.299098 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.299991 kubelet[2549]: E1101 00:30:08.299333 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.300336 kubelet[2549]: E1101 00:30:08.300288 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.300336 kubelet[2549]: W1101 00:30:08.300313 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.300741 kubelet[2549]: E1101 00:30:08.300711 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.301152 kubelet[2549]: E1101 00:30:08.301126 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.301245 kubelet[2549]: W1101 00:30:08.301153 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.301924 kubelet[2549]: E1101 00:30:08.301891 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.303747 kubelet[2549]: E1101 00:30:08.303715 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.303747 kubelet[2549]: W1101 00:30:08.303744 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.303904 kubelet[2549]: E1101 00:30:08.303763 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.856631 kubelet[2549]: E1101 00:30:08.856589 2549 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:08.858418 kubelet[2549]: E1101 00:30:08.856698 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e65ba6db-6a75-49a2-a6f9-0c14344937fb-tigera-ca-bundle podName:e65ba6db-6a75-49a2-a6f9-0c14344937fb nodeName:}" failed. No retries permitted until 2025-11-01 00:30:09.356672197 +0000 UTC m=+27.223088506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/e65ba6db-6a75-49a2-a6f9-0c14344937fb-tigera-ca-bundle") pod "calico-typha-5f75cd78b9-gj48k" (UID: "e65ba6db-6a75-49a2-a6f9-0c14344937fb") : failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:08.858418 kubelet[2549]: E1101 00:30:08.856602 2549 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Nov 1 00:30:08.858418 kubelet[2549]: E1101 00:30:08.856994 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e65ba6db-6a75-49a2-a6f9-0c14344937fb-typha-certs podName:e65ba6db-6a75-49a2-a6f9-0c14344937fb nodeName:}" failed. No retries permitted until 2025-11-01 00:30:09.356972808 +0000 UTC m=+27.223389120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/e65ba6db-6a75-49a2-a6f9-0c14344937fb-typha-certs") pod "calico-typha-5f75cd78b9-gj48k" (UID: "e65ba6db-6a75-49a2-a6f9-0c14344937fb") : failed to sync secret cache: timed out waiting for the condition
Nov 1 00:30:08.883535 kubelet[2549]: E1101 00:30:08.883486 2549 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:08.883535 kubelet[2549]: E1101 00:30:08.883531 2549 projected.go:194] Error preparing data for projected volume kube-api-access-kxdbs for pod calico-system/calico-typha-5f75cd78b9-gj48k: failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:08.883737 kubelet[2549]: E1101 00:30:08.883625 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e65ba6db-6a75-49a2-a6f9-0c14344937fb-kube-api-access-kxdbs podName:e65ba6db-6a75-49a2-a6f9-0c14344937fb nodeName:}" failed. No retries permitted until 2025-11-01 00:30:09.383603267 +0000 UTC m=+27.250019583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kxdbs" (UniqueName: "kubernetes.io/projected/e65ba6db-6a75-49a2-a6f9-0c14344937fb-kube-api-access-kxdbs") pod "calico-typha-5f75cd78b9-gj48k" (UID: "e65ba6db-6a75-49a2-a6f9-0c14344937fb") : failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:08.897210 kubelet[2549]: E1101 00:30:08.897174 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.897210 kubelet[2549]: W1101 00:30:08.897204 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.897433 kubelet[2549]: E1101 00:30:08.897234 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.897623 kubelet[2549]: E1101 00:30:08.897600 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.897623 kubelet[2549]: W1101 00:30:08.897623 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.897775 kubelet[2549]: E1101 00:30:08.897643 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.898046 kubelet[2549]: E1101 00:30:08.897990 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.898046 kubelet[2549]: W1101 00:30:08.898034 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.898187 kubelet[2549]: E1101 00:30:08.898064 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.968791 kubelet[2549]: E1101 00:30:08.968606 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.968791 kubelet[2549]: W1101 00:30:08.968633 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.968791 kubelet[2549]: E1101 00:30:08.968661 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.970084 kubelet[2549]: E1101 00:30:08.969375 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.970084 kubelet[2549]: W1101 00:30:08.969399 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.970084 kubelet[2549]: E1101 00:30:08.970043 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.999476 kubelet[2549]: E1101 00:30:08.999442 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.999476 kubelet[2549]: W1101 00:30:08.999471 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.999768 kubelet[2549]: E1101 00:30:08.999499 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:08.999900 kubelet[2549]: E1101 00:30:08.999878 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:08.999983 kubelet[2549]: W1101 00:30:08.999899 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:08.999983 kubelet[2549]: E1101 00:30:08.999918 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.000294 kubelet[2549]: E1101 00:30:09.000275 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.000390 kubelet[2549]: W1101 00:30:09.000299 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.000390 kubelet[2549]: E1101 00:30:09.000318 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.061306 kubelet[2549]: E1101 00:30:09.061250 2549 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:09.061474 kubelet[2549]: E1101 00:30:09.061381 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b64ecb02-1a47-4341-b749-58d344430179-tigera-ca-bundle podName:b64ecb02-1a47-4341-b749-58d344430179 nodeName:}" failed. No retries permitted until 2025-11-01 00:30:09.561348149 +0000 UTC m=+27.427764463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/b64ecb02-1a47-4341-b749-58d344430179-tigera-ca-bundle") pod "calico-node-d7jlj" (UID: "b64ecb02-1a47-4341-b749-58d344430179") : failed to sync configmap cache: timed out waiting for the condition
Nov 1 00:30:09.101885 kubelet[2549]: E1101 00:30:09.101798 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.101885 kubelet[2549]: W1101 00:30:09.101829 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.102435 kubelet[2549]: E1101 00:30:09.101966 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.102866 kubelet[2549]: E1101 00:30:09.102849 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.103100 kubelet[2549]: W1101 00:30:09.102913 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.103100 kubelet[2549]: E1101 00:30:09.102937 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.103814 kubelet[2549]: E1101 00:30:09.103620 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.103814 kubelet[2549]: W1101 00:30:09.103640 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.103814 kubelet[2549]: E1101 00:30:09.103660 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.104427 kubelet[2549]: E1101 00:30:09.104309 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.104427 kubelet[2549]: W1101 00:30:09.104331 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.104427 kubelet[2549]: E1101 00:30:09.104349 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.206136 kubelet[2549]: E1101 00:30:09.205973 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.206136 kubelet[2549]: W1101 00:30:09.206039 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.206136 kubelet[2549]: E1101 00:30:09.206072 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.208045 kubelet[2549]: E1101 00:30:09.206466 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.208045 kubelet[2549]: W1101 00:30:09.206486 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.208045 kubelet[2549]: E1101 00:30:09.206508 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.208045 kubelet[2549]: E1101 00:30:09.206852 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.208045 kubelet[2549]: W1101 00:30:09.206865 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.208045 kubelet[2549]: E1101 00:30:09.206880 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.208045 kubelet[2549]: E1101 00:30:09.207215 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.208045 kubelet[2549]: W1101 00:30:09.207229 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.208045 kubelet[2549]: E1101 00:30:09.207262 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.309044 kubelet[2549]: E1101 00:30:09.308963 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.309044 kubelet[2549]: W1101 00:30:09.308995 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.309044 kubelet[2549]: E1101 00:30:09.309044 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.309577 kubelet[2549]: E1101 00:30:09.309556 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.309669 kubelet[2549]: W1101 00:30:09.309577 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.309669 kubelet[2549]: E1101 00:30:09.309628 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:30:09.310081 kubelet[2549]: E1101 00:30:09.310056 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.310081 kubelet[2549]: W1101 00:30:09.310077 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.310270 kubelet[2549]: E1101 00:30:09.310098 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.310609 kubelet[2549]: E1101 00:30:09.310486 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.310609 kubelet[2549]: W1101 00:30:09.310518 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.310609 kubelet[2549]: E1101 00:30:09.310538 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.411489 kubelet[2549]: E1101 00:30:09.411456 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.411489 kubelet[2549]: W1101 00:30:09.411481 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.411779 kubelet[2549]: E1101 00:30:09.411510 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.411943 kubelet[2549]: E1101 00:30:09.411923 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.411943 kubelet[2549]: W1101 00:30:09.411943 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.412131 kubelet[2549]: E1101 00:30:09.411978 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.412426 kubelet[2549]: E1101 00:30:09.412400 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.412426 kubelet[2549]: W1101 00:30:09.412423 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.412577 kubelet[2549]: E1101 00:30:09.412463 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.412851 kubelet[2549]: E1101 00:30:09.412827 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.412851 kubelet[2549]: W1101 00:30:09.412851 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.413002 kubelet[2549]: E1101 00:30:09.412877 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.413246 kubelet[2549]: E1101 00:30:09.413224 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.413246 kubelet[2549]: W1101 00:30:09.413244 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.413487 kubelet[2549]: E1101 00:30:09.413281 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.413690 kubelet[2549]: E1101 00:30:09.413668 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.413690 kubelet[2549]: W1101 00:30:09.413688 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.414141 kubelet[2549]: E1101 00:30:09.413774 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.414141 kubelet[2549]: E1101 00:30:09.413995 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.414141 kubelet[2549]: W1101 00:30:09.414039 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.414455 kubelet[2549]: E1101 00:30:09.414339 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.414455 kubelet[2549]: W1101 00:30:09.414359 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.414455 kubelet[2549]: E1101 00:30:09.414377 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.416043 kubelet[2549]: E1101 00:30:09.414670 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.416043 kubelet[2549]: W1101 00:30:09.414689 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.416043 kubelet[2549]: E1101 00:30:09.414707 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.416043 kubelet[2549]: E1101 00:30:09.415000 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.416043 kubelet[2549]: W1101 00:30:09.415044 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.416043 kubelet[2549]: E1101 00:30:09.415062 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.416043 kubelet[2549]: E1101 00:30:09.415395 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.416043 kubelet[2549]: W1101 00:30:09.415410 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.416043 kubelet[2549]: E1101 00:30:09.415426 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.421024 kubelet[2549]: E1101 00:30:09.419746 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.421397 kubelet[2549]: E1101 00:30:09.421370 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.421397 kubelet[2549]: W1101 00:30:09.421396 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.421536 kubelet[2549]: E1101 00:30:09.421416 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.425032 kubelet[2549]: E1101 00:30:09.424323 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.425032 kubelet[2549]: W1101 00:30:09.424343 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.425032 kubelet[2549]: E1101 00:30:09.424366 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.425032 kubelet[2549]: E1101 00:30:09.424641 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.425032 kubelet[2549]: W1101 00:30:09.424653 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.425032 kubelet[2549]: E1101 00:30:09.424667 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.425377 kubelet[2549]: E1101 00:30:09.425078 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.425377 kubelet[2549]: W1101 00:30:09.425093 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.425377 kubelet[2549]: E1101 00:30:09.425109 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.427041 kubelet[2549]: E1101 00:30:09.425810 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.427041 kubelet[2549]: W1101 00:30:09.426043 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.427041 kubelet[2549]: E1101 00:30:09.426060 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.427708 kubelet[2549]: E1101 00:30:09.427551 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.427708 kubelet[2549]: W1101 00:30:09.427570 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.427708 kubelet[2549]: E1101 00:30:09.427588 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:30:09.432043 kubelet[2549]: E1101 00:30:09.430246 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.432405 kubelet[2549]: W1101 00:30:09.432173 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.432405 kubelet[2549]: E1101 00:30:09.432212 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:09.432977 kubelet[2549]: E1101 00:30:09.432853 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:09.432977 kubelet[2549]: W1101 00:30:09.432874 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:09.432977 kubelet[2549]: E1101 00:30:09.432893 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 1 00:30:09.492934 containerd[1462]: time="2025-11-01T00:30:09.492768323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f75cd78b9-gj48k,Uid:e65ba6db-6a75-49a2-a6f9-0c14344937fb,Namespace:calico-system,Attempt:0,}"
Nov 1 00:30:09.523098 kubelet[2549]: E1101 00:30:09.522591 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.523098 kubelet[2549]: W1101 00:30:09.522622 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.523098 kubelet[2549]: E1101 00:30:09.522655 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:30:09.530553 containerd[1462]: time="2025-11-01T00:30:09.530413849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:30:09.530553 containerd[1462]: time="2025-11-01T00:30:09.530476533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:30:09.530553 containerd[1462]: time="2025-11-01T00:30:09.530494258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:30:09.530943 containerd[1462]: time="2025-11-01T00:30:09.530597280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:30:09.569208 systemd[1]: Started cri-containerd-15c12adb80e82c30f4da042b2c39e1caaf778338a80e4e58e8b712a2eac73a2e.scope - libcontainer container 15c12adb80e82c30f4da042b2c39e1caaf778338a80e4e58e8b712a2eac73a2e.
Nov 1 00:30:09.625055 kubelet[2549]: E1101 00:30:09.623656 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.625055 kubelet[2549]: W1101 00:30:09.623685 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.625055 kubelet[2549]: E1101 00:30:09.623717 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same FlexVolume driver-call failure triplet repeats through 00:30:09.626 ...]
Nov 1 00:30:09.632108 containerd[1462]: time="2025-11-01T00:30:09.632061123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f75cd78b9-gj48k,Uid:e65ba6db-6a75-49a2-a6f9-0c14344937fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"15c12adb80e82c30f4da042b2c39e1caaf778338a80e4e58e8b712a2eac73a2e\""
Nov 1 00:30:09.634461 kubelet[2549]: E1101 00:30:09.634433 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:09.634461 kubelet[2549]: W1101 00:30:09.634462 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:09.634626 kubelet[2549]: E1101 00:30:09.634480 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 1 00:30:09.635922 containerd[1462]: time="2025-11-01T00:30:09.635889361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 1 00:30:09.716252 containerd[1462]: time="2025-11-01T00:30:09.716179872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7jlj,Uid:b64ecb02-1a47-4341-b749-58d344430179,Namespace:calico-system,Attempt:0,}"
Nov 1 00:30:09.757946 containerd[1462]: time="2025-11-01T00:30:09.757562682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:30:09.757946 containerd[1462]: time="2025-11-01T00:30:09.757780536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:30:09.758268 containerd[1462]: time="2025-11-01T00:30:09.757836842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:30:09.763727 containerd[1462]: time="2025-11-01T00:30:09.763572034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:30:09.788238 systemd[1]: Started cri-containerd-80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9.scope - libcontainer container 80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9.
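For context on the repeated driver-call failures above: kubelet execs each FlexVolume driver binary (here the missing `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`) with the `init` verb and parses its stdout as JSON; a missing executable produces empty output, hence "unexpected end of JSON input". A minimal sketch of a driver stub that would satisfy the `init` handshake, assuming the conventional FlexVolume JSON status shape (the `flexvolume_driver` function name is illustrative, not from the log):

```shell
#!/bin/sh
# Hypothetical stub for a FlexVolume driver such as the nodeagent~uds/uds
# binary the kubelet is probing for in the log above.
flexvolume_driver() {
  case "$1" in
    init)
      # kubelet expects a JSON status object on stdout;
      # "attach": false declares that this driver does not
      # implement the attach/detach call set.
      echo '{"status": "Success", "capabilities": {"attach": false}}'
      ;;
    *)
      # Any verb the stub does not implement is reported as unsupported.
      echo '{"status": "Not supported"}'
      return 1
      ;;
  esac
}

# Dispatch only when invoked with arguments, as kubelet would do.
if [ "$#" -gt 0 ]; then
  flexvolume_driver "$@"
fi
```

Installing an executable like this at the probed path (or removing the empty plugin directory) would stop the "unexpected end of JSON input" probe errors seen here.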
Nov 1 00:30:09.827192 containerd[1462]: time="2025-11-01T00:30:09.827136095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7jlj,Uid:b64ecb02-1a47-4341-b749-58d344430179,Namespace:calico-system,Attempt:0,} returns sandbox id \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\""
Nov 1 00:30:10.303073 kubelet[2549]: E1101 00:30:10.302396 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e"
Nov 1 00:30:10.921104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970302039.mount: Deactivated successfully.
Nov 1 00:30:11.923789 containerd[1462]: time="2025-11-01T00:30:11.923708811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:11.925054 containerd[1462]: time="2025-11-01T00:30:11.924907596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 1 00:30:11.926223 containerd[1462]: time="2025-11-01T00:30:11.926158928Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:11.929099 containerd[1462]: time="2025-11-01T00:30:11.929061486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:11.930693 containerd[1462]: time="2025-11-01T00:30:11.929956699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.294020923s"
Nov 1 00:30:11.930693 containerd[1462]: time="2025-11-01T00:30:11.930003707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 1 00:30:11.931952 containerd[1462]: time="2025-11-01T00:30:11.931912914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 1 00:30:11.955354 containerd[1462]: time="2025-11-01T00:30:11.955212505Z" level=info msg="CreateContainer within sandbox \"15c12adb80e82c30f4da042b2c39e1caaf778338a80e4e58e8b712a2eac73a2e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 1 00:30:11.971161 containerd[1462]: time="2025-11-01T00:30:11.971082680Z" level=info msg="CreateContainer within sandbox \"15c12adb80e82c30f4da042b2c39e1caaf778338a80e4e58e8b712a2eac73a2e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b3d4aad7b5e66fb4aa843790a9ff03624aa93883fb36bed2d82be57de6387f8\""
Nov 1 00:30:11.972149 containerd[1462]: time="2025-11-01T00:30:11.972091925Z" level=info msg="StartContainer for \"2b3d4aad7b5e66fb4aa843790a9ff03624aa93883fb36bed2d82be57de6387f8\""
Nov 1 00:30:12.022338 systemd[1]: Started cri-containerd-2b3d4aad7b5e66fb4aa843790a9ff03624aa93883fb36bed2d82be57de6387f8.scope - libcontainer container 2b3d4aad7b5e66fb4aa843790a9ff03624aa93883fb36bed2d82be57de6387f8.
Nov 1 00:30:12.087605 containerd[1462]: time="2025-11-01T00:30:12.087552803Z" level=info msg="StartContainer for \"2b3d4aad7b5e66fb4aa843790a9ff03624aa93883fb36bed2d82be57de6387f8\" returns successfully"
Nov 1 00:30:12.304756 kubelet[2549]: E1101 00:30:12.304332 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e"
Nov 1 00:30:12.444413 kubelet[2549]: I1101 00:30:12.444320 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f75cd78b9-gj48k" podStartSLOduration=3.148307129 podStartE2EDuration="5.444267195s" podCreationTimestamp="2025-11-01 00:30:07 +0000 UTC" firstStartedPulling="2025-11-01 00:30:09.635460921 +0000 UTC m=+27.501877230" lastFinishedPulling="2025-11-01 00:30:11.931420984 +0000 UTC m=+29.797837296" observedRunningTime="2025-11-01 00:30:12.444110754 +0000 UTC m=+30.310527086" watchObservedRunningTime="2025-11-01 00:30:12.444267195 +0000 UTC m=+30.310683509"
Nov 1 00:30:12.480332 kubelet[2549]: E1101 00:30:12.480240 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:30:12.480332 kubelet[2549]: W1101 00:30:12.480297 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:30:12.480332 kubelet[2549]: E1101 00:30:12.480333 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:30:12.480982 kubelet[2549]: E1101 00:30:12.480848 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:12.480982 kubelet[2549]: W1101 00:30:12.480894 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:12.480982 kubelet[2549]: E1101 00:30:12.480922 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:30:12.483929 kubelet[2549]: E1101 00:30:12.482446 2549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:30:12.483929 kubelet[2549]: W1101 00:30:12.482471 2549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:30:12.483929 kubelet[2549]: E1101 00:30:12.482513 2549 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 1 00:30:13.225242 containerd[1462]: time="2025-11-01T00:30:13.223828041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:13.225242 containerd[1462]: time="2025-11-01T00:30:13.224818814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 1 00:30:13.226387 containerd[1462]: time="2025-11-01T00:30:13.226342645Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:13.229197 containerd[1462]: time="2025-11-01T00:30:13.229155795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:13.230145 containerd[1462]: time="2025-11-01T00:30:13.230097227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.29813435s"
Nov 1 00:30:13.230234 containerd[1462]: time="2025-11-01T00:30:13.230150437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 1 00:30:13.233718 containerd[1462]: time="2025-11-01T00:30:13.233503984Z" level=info msg="CreateContainer within sandbox \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 1 00:30:13.253677 containerd[1462]: time="2025-11-01T00:30:13.253636269Z" level=info msg="CreateContainer within sandbox \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470\""
Nov 1 00:30:13.254857 containerd[1462]: time="2025-11-01T00:30:13.254809859Z" level=info msg="StartContainer for \"bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470\""
Nov 1 00:30:13.342215 systemd[1]: Started cri-containerd-bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470.scope - libcontainer container bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470.
Nov 1 00:30:13.422908 kubelet[2549]: I1101 00:30:13.422278 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:30:13.448795 containerd[1462]: time="2025-11-01T00:30:13.448733705Z" level=info msg="StartContainer for \"bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470\" returns successfully"
Nov 1 00:30:13.466711 systemd[1]: cri-containerd-bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470.scope: Deactivated successfully.
Nov 1 00:30:13.506970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470-rootfs.mount: Deactivated successfully.
Nov 1 00:30:14.185094 containerd[1462]: time="2025-11-01T00:30:14.184956181Z" level=info msg="shim disconnected" id=bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470 namespace=k8s.io
Nov 1 00:30:14.185094 containerd[1462]: time="2025-11-01T00:30:14.185058608Z" level=warning msg="cleaning up after shim disconnected" id=bfc1d8fb7e776c55b14706405f105238bfed4b108fc13b1ce4b50971a9534470 namespace=k8s.io
Nov 1 00:30:14.185094 containerd[1462]: time="2025-11-01T00:30:14.185075729Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:30:14.304053 kubelet[2549]: E1101 00:30:14.302719 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e"
Nov 1 00:30:14.427873 containerd[1462]: time="2025-11-01T00:30:14.427823054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 1 00:30:16.303848 kubelet[2549]: E1101 00:30:16.302234 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e"
Nov 1 00:30:18.302321 kubelet[2549]: E1101 00:30:18.302200 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e"
Nov 1 00:30:18.313410 containerd[1462]: time="2025-11-01T00:30:18.313345684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:18.314755 containerd[1462]: time="2025-11-01T00:30:18.314603839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 1 00:30:18.316045 containerd[1462]: time="2025-11-01T00:30:18.315764420Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:18.320197 containerd[1462]: time="2025-11-01T00:30:18.320098080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:18.321387 containerd[1462]: time="2025-11-01T00:30:18.321332427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.892686891s"
Nov 1 00:30:18.321387 containerd[1462]: time="2025-11-01T00:30:18.321373123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 1 00:30:18.324473 containerd[1462]: time="2025-11-01T00:30:18.324429051Z" level=info msg="CreateContainer within sandbox \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 1 00:30:18.342957 containerd[1462]: time="2025-11-01T00:30:18.342823835Z" level=info msg="CreateContainer within sandbox \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1\""
Nov 1 00:30:18.345761 containerd[1462]: time="2025-11-01T00:30:18.345720616Z" level=info msg="StartContainer for \"a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1\""
Nov 1 00:30:18.395272 systemd[1]: Started cri-containerd-a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1.scope - libcontainer container a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1.
Nov 1 00:30:18.440067 containerd[1462]: time="2025-11-01T00:30:18.439705499Z" level=info msg="StartContainer for \"a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1\" returns successfully"
Nov 1 00:30:19.405178 containerd[1462]: time="2025-11-01T00:30:19.405064120Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:30:19.409214 systemd[1]: cri-containerd-a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1.scope: Deactivated successfully.
Nov 1 00:30:19.449857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1-rootfs.mount: Deactivated successfully.
Nov 1 00:30:19.453213 kubelet[2549]: I1101 00:30:19.453175 2549 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 1 00:30:19.515813 kubelet[2549]: I1101 00:30:19.515613 2549 status_manager.go:890] "Failed to get status for pod" podUID="37d7183e-e34d-4dec-b261-c74c0840b2de" pod="kube-system/coredns-668d6bf9bc-pd8zf" err="pods \"coredns-668d6bf9bc-pd8zf\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object"
Nov 1 00:30:19.517317 kubelet[2549]: W1101 00:30:19.517286 2549 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object
Nov 1 00:30:19.518042 kubelet[2549]: E1101 00:30:19.517487 2549 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' and this object" logger="UnhandledError"
Nov 1 00:30:19.525805 systemd[1]: Created slice kubepods-burstable-pod37d7183e_e34d_4dec_b261_c74c0840b2de.slice - libcontainer container kubepods-burstable-pod37d7183e_e34d_4dec_b261_c74c0840b2de.slice.
Nov 1 00:30:19.552070 systemd[1]: Created slice kubepods-besteffort-podeb24b203_bba2_4a68_ac20_bbf747c87903.slice - libcontainer container kubepods-besteffort-podeb24b203_bba2_4a68_ac20_bbf747c87903.slice.
Nov 1 00:30:19.568768 systemd[1]: Created slice kubepods-burstable-pod2e557abe_a350_4983_a5a3_ea11db3910b6.slice - libcontainer container kubepods-burstable-pod2e557abe_a350_4983_a5a3_ea11db3910b6.slice.
Nov 1 00:30:19.588118 systemd[1]: Created slice kubepods-besteffort-pod73f63e7d_cd05_453e_9fac_681616f1563c.slice - libcontainer container kubepods-besteffort-pod73f63e7d_cd05_453e_9fac_681616f1563c.slice.
Nov 1 00:30:19.600468 systemd[1]: Created slice kubepods-besteffort-pod95a1fab6_d611_401e_9fe4_c7918f3b5d89.slice - libcontainer container kubepods-besteffort-pod95a1fab6_d611_401e_9fe4_c7918f3b5d89.slice.
Nov 1 00:30:19.681691 kubelet[2549]: I1101 00:30:19.602617 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9btkr\" (UniqueName: \"kubernetes.io/projected/f01ebb62-cbae-4771-a12a-33c798f125cd-kube-api-access-9btkr\") pod \"calico-apiserver-7bb458b5b7-htf5s\" (UID: \"f01ebb62-cbae-4771-a12a-33c798f125cd\") " pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s"
Nov 1 00:30:19.681691 kubelet[2549]: I1101 00:30:19.602667 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnjvp\" (UniqueName: \"kubernetes.io/projected/95a1fab6-d611-401e-9fe4-c7918f3b5d89-kube-api-access-tnjvp\") pod \"whisker-967b467f4-qpfxh\" (UID: \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\") " pod="calico-system/whisker-967b467f4-qpfxh"
Nov 1 00:30:19.681691 kubelet[2549]: I1101 00:30:19.602697 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e557abe-a350-4983-a5a3-ea11db3910b6-config-volume\") pod \"coredns-668d6bf9bc-jbcpc\" (UID: \"2e557abe-a350-4983-a5a3-ea11db3910b6\") " pod="kube-system/coredns-668d6bf9bc-jbcpc"
Nov 1 00:30:19.681691 kubelet[2549]: I1101 00:30:19.602727 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhd88\" (UniqueName: \"kubernetes.io/projected/ce0ad95a-ccba-4cd4-91a4-5a94be968da8-kube-api-access-jhd88\") pod \"calico-apiserver-7bb458b5b7-gxqtr\" (UID: \"ce0ad95a-ccba-4cd4-91a4-5a94be968da8\") " pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr"
Nov 1 00:30:19.681691 kubelet[2549]: I1101 00:30:19.602779 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb24b203-bba2-4a68-ac20-bbf747c87903-tigera-ca-bundle\") pod \"calico-kube-controllers-749d6dfb67-b9g5c\" (UID: \"eb24b203-bba2-4a68-ac20-bbf747c87903\") " pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c"
Nov 1 00:30:19.615786 systemd[1]: Created slice kubepods-besteffort-podce0ad95a_ccba_4cd4_91a4_5a94be968da8.slice - libcontainer container kubepods-besteffort-podce0ad95a_ccba_4cd4_91a4_5a94be968da8.slice.
Nov 1 00:30:19.686849 kubelet[2549]: I1101 00:30:19.602846 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-ca-bundle\") pod \"whisker-967b467f4-qpfxh\" (UID: \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\") " pod="calico-system/whisker-967b467f4-qpfxh"
Nov 1 00:30:19.686849 kubelet[2549]: I1101 00:30:19.602877 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37d7183e-e34d-4dec-b261-c74c0840b2de-config-volume\") pod \"coredns-668d6bf9bc-pd8zf\" (UID: \"37d7183e-e34d-4dec-b261-c74c0840b2de\") " pod="kube-system/coredns-668d6bf9bc-pd8zf"
Nov 1 00:30:19.686849 kubelet[2549]: I1101 00:30:19.602910 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f63e7d-cd05-453e-9fac-681616f1563c-goldmane-ca-bundle\") pod \"goldmane-666569f655-cjssg\" (UID: \"73f63e7d-cd05-453e-9fac-681616f1563c\") " pod="calico-system/goldmane-666569f655-cjssg"
Nov 1 00:30:19.686849 kubelet[2549]: I1101 00:30:19.602939 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce0ad95a-ccba-4cd4-91a4-5a94be968da8-calico-apiserver-certs\") pod \"calico-apiserver-7bb458b5b7-gxqtr\" (UID: \"ce0ad95a-ccba-4cd4-91a4-5a94be968da8\") " pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr"
Nov 1 00:30:19.686849 kubelet[2549]: I1101 00:30:19.602978 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f01ebb62-cbae-4771-a12a-33c798f125cd-calico-apiserver-certs\") pod \"calico-apiserver-7bb458b5b7-htf5s\" (UID: \"f01ebb62-cbae-4771-a12a-33c798f125cd\") " pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s"
Nov 1 00:30:19.626546 systemd[1]: Created slice kubepods-besteffort-podf01ebb62_cbae_4771_a12a_33c798f125cd.slice - libcontainer container kubepods-besteffort-podf01ebb62_cbae_4771_a12a_33c798f125cd.slice.
Nov 1 00:30:19.687606 kubelet[2549]: I1101 00:30:19.603055 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f63e7d-cd05-453e-9fac-681616f1563c-config\") pod \"goldmane-666569f655-cjssg\" (UID: \"73f63e7d-cd05-453e-9fac-681616f1563c\") " pod="calico-system/goldmane-666569f655-cjssg"
Nov 1 00:30:19.687606 kubelet[2549]: I1101 00:30:19.603089 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgl65\" (UniqueName: \"kubernetes.io/projected/2e557abe-a350-4983-a5a3-ea11db3910b6-kube-api-access-bgl65\") pod \"coredns-668d6bf9bc-jbcpc\" (UID: \"2e557abe-a350-4983-a5a3-ea11db3910b6\") " pod="kube-system/coredns-668d6bf9bc-jbcpc"
Nov 1 00:30:19.687606 kubelet[2549]: I1101 00:30:19.603121 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/73f63e7d-cd05-453e-9fac-681616f1563c-goldmane-key-pair\") pod \"goldmane-666569f655-cjssg\" (UID: \"73f63e7d-cd05-453e-9fac-681616f1563c\") " pod="calico-system/goldmane-666569f655-cjssg"
Nov 1 00:30:19.687606 kubelet[2549]: I1101 00:30:19.603150 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-backend-key-pair\") pod \"whisker-967b467f4-qpfxh\" (UID: \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\") " pod="calico-system/whisker-967b467f4-qpfxh"
Nov 1 00:30:19.687606 kubelet[2549]: I1101 00:30:19.603184 2549
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw88s\" (UniqueName: \"kubernetes.io/projected/37d7183e-e34d-4dec-b261-c74c0840b2de-kube-api-access-tw88s\") pod \"coredns-668d6bf9bc-pd8zf\" (UID: \"37d7183e-e34d-4dec-b261-c74c0840b2de\") " pod="kube-system/coredns-668d6bf9bc-pd8zf" Nov 1 00:30:19.688707 kubelet[2549]: I1101 00:30:19.603217 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrqv\" (UniqueName: \"kubernetes.io/projected/eb24b203-bba2-4a68-ac20-bbf747c87903-kube-api-access-vrrqv\") pod \"calico-kube-controllers-749d6dfb67-b9g5c\" (UID: \"eb24b203-bba2-4a68-ac20-bbf747c87903\") " pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" Nov 1 00:30:19.688707 kubelet[2549]: I1101 00:30:19.603256 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p65gk\" (UniqueName: \"kubernetes.io/projected/73f63e7d-cd05-453e-9fac-681616f1563c-kube-api-access-p65gk\") pod \"goldmane-666569f655-cjssg\" (UID: \"73f63e7d-cd05-453e-9fac-681616f1563c\") " pod="calico-system/goldmane-666569f655-cjssg" Nov 1 00:30:19.859195 containerd[1462]: time="2025-11-01T00:30:19.859130143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749d6dfb67-b9g5c,Uid:eb24b203-bba2-4a68-ac20-bbf747c87903,Namespace:calico-system,Attempt:0,}" Nov 1 00:30:19.983324 containerd[1462]: time="2025-11-01T00:30:19.982985603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cjssg,Uid:73f63e7d-cd05-453e-9fac-681616f1563c,Namespace:calico-system,Attempt:0,}" Nov 1 00:30:19.983324 containerd[1462]: time="2025-11-01T00:30:19.982985706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-htf5s,Uid:f01ebb62-cbae-4771-a12a-33c798f125cd,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:30:19.988473 containerd[1462]: 
time="2025-11-01T00:30:19.988139555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-gxqtr,Uid:ce0ad95a-ccba-4cd4-91a4-5a94be968da8,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:30:19.989293 containerd[1462]: time="2025-11-01T00:30:19.989148775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-967b467f4-qpfxh,Uid:95a1fab6-d611-401e-9fe4-c7918f3b5d89,Namespace:calico-system,Attempt:0,}" Nov 1 00:30:19.999579 containerd[1462]: time="2025-11-01T00:30:19.999516095Z" level=info msg="shim disconnected" id=a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1 namespace=k8s.io Nov 1 00:30:19.999579 containerd[1462]: time="2025-11-01T00:30:19.999577883Z" level=warning msg="cleaning up after shim disconnected" id=a62d2caf14b61b6dd86f19182ec77a4d89582e526bf463690c2490746e9cecd1 namespace=k8s.io Nov 1 00:30:20.000496 containerd[1462]: time="2025-11-01T00:30:19.999591034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:30:20.246546 containerd[1462]: time="2025-11-01T00:30:20.245671170Z" level=error msg="Failed to destroy network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.249790 containerd[1462]: time="2025-11-01T00:30:20.249714348Z" level=error msg="encountered an error cleaning up failed sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.249917 containerd[1462]: time="2025-11-01T00:30:20.249823608Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-htf5s,Uid:f01ebb62-cbae-4771-a12a-33c798f125cd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.250215 kubelet[2549]: E1101 00:30:20.250168 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.250325 kubelet[2549]: E1101 00:30:20.250272 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" Nov 1 00:30:20.250325 kubelet[2549]: E1101 00:30:20.250310 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" Nov 1 00:30:20.250447 kubelet[2549]: E1101 00:30:20.250378 2549 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb458b5b7-htf5s_calico-apiserver(f01ebb62-cbae-4771-a12a-33c798f125cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb458b5b7-htf5s_calico-apiserver(f01ebb62-cbae-4771-a12a-33c798f125cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:30:20.291072 containerd[1462]: time="2025-11-01T00:30:20.290937656Z" level=error msg="Failed to destroy network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.291490 containerd[1462]: time="2025-11-01T00:30:20.291435428Z" level=error msg="encountered an error cleaning up failed sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.291613 containerd[1462]: time="2025-11-01T00:30:20.291527674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749d6dfb67-b9g5c,Uid:eb24b203-bba2-4a68-ac20-bbf747c87903,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.292389 kubelet[2549]: E1101 00:30:20.291827 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.292389 kubelet[2549]: E1101 00:30:20.291914 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" Nov 1 00:30:20.292389 kubelet[2549]: E1101 00:30:20.291951 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" Nov 1 00:30:20.293340 kubelet[2549]: E1101 00:30:20.292041 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-749d6dfb67-b9g5c_calico-system(eb24b203-bba2-4a68-ac20-bbf747c87903)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-749d6dfb67-b9g5c_calico-system(eb24b203-bba2-4a68-ac20-bbf747c87903)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:30:20.298627 containerd[1462]: time="2025-11-01T00:30:20.298580711Z" level=error msg="Failed to destroy network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.299203 containerd[1462]: time="2025-11-01T00:30:20.299057457Z" level=error msg="encountered an error cleaning up failed sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.299203 containerd[1462]: time="2025-11-01T00:30:20.299150003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cjssg,Uid:73f63e7d-cd05-453e-9fac-681616f1563c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.299904 kubelet[2549]: E1101 00:30:20.299428 2549 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.299904 kubelet[2549]: E1101 00:30:20.299499 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cjssg" Nov 1 00:30:20.299904 kubelet[2549]: E1101 00:30:20.299533 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cjssg" Nov 1 00:30:20.300145 kubelet[2549]: E1101 00:30:20.299603 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cjssg_calico-system(73f63e7d-cd05-453e-9fac-681616f1563c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cjssg_calico-system(73f63e7d-cd05-453e-9fac-681616f1563c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:30:20.308401 containerd[1462]: time="2025-11-01T00:30:20.307962594Z" level=error msg="Failed to destroy network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.309552 containerd[1462]: time="2025-11-01T00:30:20.308613140Z" level=error msg="encountered an error cleaning up failed sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.309552 containerd[1462]: time="2025-11-01T00:30:20.309503192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-gxqtr,Uid:ce0ad95a-ccba-4cd4-91a4-5a94be968da8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.310925 kubelet[2549]: E1101 00:30:20.310714 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
00:30:20.310925 kubelet[2549]: E1101 00:30:20.310769 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" Nov 1 00:30:20.310925 kubelet[2549]: E1101 00:30:20.310804 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" Nov 1 00:30:20.311931 kubelet[2549]: E1101 00:30:20.310859 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb458b5b7-gxqtr_calico-apiserver(ce0ad95a-ccba-4cd4-91a4-5a94be968da8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb458b5b7-gxqtr_calico-apiserver(ce0ad95a-ccba-4cd4-91a4-5a94be968da8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:30:20.320532 systemd[1]: Created slice kubepods-besteffort-podf2c53676_0b50_4c2c_9234_572240cab45e.slice - libcontainer container 
kubepods-besteffort-podf2c53676_0b50_4c2c_9234_572240cab45e.slice. Nov 1 00:30:20.321092 containerd[1462]: time="2025-11-01T00:30:20.321049232Z" level=error msg="Failed to destroy network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.321501 containerd[1462]: time="2025-11-01T00:30:20.321461756Z" level=error msg="encountered an error cleaning up failed sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.321826 containerd[1462]: time="2025-11-01T00:30:20.321529561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-967b467f4-qpfxh,Uid:95a1fab6-d611-401e-9fe4-c7918f3b5d89,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.323180 kubelet[2549]: E1101 00:30:20.323136 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.323286 kubelet[2549]: E1101 00:30:20.323186 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-967b467f4-qpfxh" Nov 1 00:30:20.323286 kubelet[2549]: E1101 00:30:20.323220 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-967b467f4-qpfxh" Nov 1 00:30:20.323286 kubelet[2549]: E1101 00:30:20.323268 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-967b467f4-qpfxh_calico-system(95a1fab6-d611-401e-9fe4-c7918f3b5d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-967b467f4-qpfxh_calico-system(95a1fab6-d611-401e-9fe4-c7918f3b5d89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-967b467f4-qpfxh" podUID="95a1fab6-d611-401e-9fe4-c7918f3b5d89" Nov 1 00:30:20.326804 containerd[1462]: time="2025-11-01T00:30:20.326751833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v2bt,Uid:f2c53676-0b50-4c2c-9234-572240cab45e,Namespace:calico-system,Attempt:0,}" Nov 1 00:30:20.401812 containerd[1462]: time="2025-11-01T00:30:20.401740548Z" level=error 
msg="Failed to destroy network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.402251 containerd[1462]: time="2025-11-01T00:30:20.402202668Z" level=error msg="encountered an error cleaning up failed sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.402452 containerd[1462]: time="2025-11-01T00:30:20.402292591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v2bt,Uid:f2c53676-0b50-4c2c-9234-572240cab45e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.402643 kubelet[2549]: E1101 00:30:20.402569 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.402723 kubelet[2549]: E1101 00:30:20.402648 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9v2bt" Nov 1 00:30:20.402723 kubelet[2549]: E1101 00:30:20.402683 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9v2bt" Nov 1 00:30:20.402904 kubelet[2549]: E1101 00:30:20.402740 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:20.456838 kubelet[2549]: I1101 00:30:20.456803 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:20.459443 containerd[1462]: time="2025-11-01T00:30:20.458753607Z" level=info msg="StopPodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\"" Nov 1 00:30:20.459443 containerd[1462]: time="2025-11-01T00:30:20.459075481Z" level=info msg="Ensure 
that sandbox 5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b in task-service has been cleanup successfully" Nov 1 00:30:20.461038 kubelet[2549]: I1101 00:30:20.460525 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:20.461386 containerd[1462]: time="2025-11-01T00:30:20.461350154Z" level=info msg="StopPodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\"" Nov 1 00:30:20.461621 containerd[1462]: time="2025-11-01T00:30:20.461577543Z" level=info msg="Ensure that sandbox dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0 in task-service has been cleanup successfully" Nov 1 00:30:20.480515 kubelet[2549]: I1101 00:30:20.478238 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:20.481191 containerd[1462]: time="2025-11-01T00:30:20.481147462Z" level=info msg="StopPodSandbox for \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\"" Nov 1 00:30:20.485561 containerd[1462]: time="2025-11-01T00:30:20.484416316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:30:20.488731 containerd[1462]: time="2025-11-01T00:30:20.485951618Z" level=info msg="Ensure that sandbox f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d in task-service has been cleanup successfully" Nov 1 00:30:20.503330 kubelet[2549]: I1101 00:30:20.503212 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:20.507436 containerd[1462]: time="2025-11-01T00:30:20.507398453Z" level=info msg="StopPodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\"" Nov 1 00:30:20.507884 containerd[1462]: time="2025-11-01T00:30:20.507836089Z" 
level=info msg="Ensure that sandbox 9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad in task-service has been cleanup successfully" Nov 1 00:30:20.528542 kubelet[2549]: I1101 00:30:20.528515 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:20.529617 containerd[1462]: time="2025-11-01T00:30:20.529573729Z" level=info msg="StopPodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\"" Nov 1 00:30:20.532242 containerd[1462]: time="2025-11-01T00:30:20.530658771Z" level=info msg="Ensure that sandbox 78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd in task-service has been cleanup successfully" Nov 1 00:30:20.545407 kubelet[2549]: I1101 00:30:20.545380 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:20.547853 containerd[1462]: time="2025-11-01T00:30:20.547814434Z" level=info msg="StopPodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\"" Nov 1 00:30:20.549154 containerd[1462]: time="2025-11-01T00:30:20.549067988Z" level=info msg="Ensure that sandbox d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65 in task-service has been cleanup successfully" Nov 1 00:30:20.611385 containerd[1462]: time="2025-11-01T00:30:20.611185908Z" level=error msg="StopPodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" failed" error="failed to destroy network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.611555 kubelet[2549]: E1101 00:30:20.611461 2549 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:20.611635 kubelet[2549]: E1101 00:30:20.611579 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b"} Nov 1 00:30:20.611695 kubelet[2549]: E1101 00:30:20.611660 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:20.612074 kubelet[2549]: E1101 00:30:20.611696 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-967b467f4-qpfxh" podUID="95a1fab6-d611-401e-9fe4-c7918f3b5d89" Nov 1 00:30:20.635234 containerd[1462]: time="2025-11-01T00:30:20.635146861Z" level=error msg="StopPodSandbox for 
\"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" failed" error="failed to destroy network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.642803 kubelet[2549]: E1101 00:30:20.642735 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:20.642967 kubelet[2549]: E1101 00:30:20.642812 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d"} Nov 1 00:30:20.642967 kubelet[2549]: E1101 00:30:20.642859 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2c53676-0b50-4c2c-9234-572240cab45e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:20.642967 kubelet[2549]: E1101 00:30:20.642892 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2c53676-0b50-4c2c-9234-572240cab45e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:20.655039 containerd[1462]: time="2025-11-01T00:30:20.653799108Z" level=error msg="StopPodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" failed" error="failed to destroy network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.655177 kubelet[2549]: E1101 00:30:20.654098 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:20.655177 kubelet[2549]: E1101 00:30:20.654156 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0"} Nov 1 00:30:20.655177 kubelet[2549]: E1101 00:30:20.654201 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce0ad95a-ccba-4cd4-91a4-5a94be968da8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:20.655177 kubelet[2549]: E1101 00:30:20.654235 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce0ad95a-ccba-4cd4-91a4-5a94be968da8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:30:20.675737 containerd[1462]: time="2025-11-01T00:30:20.675658551Z" level=error msg="StopPodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" failed" error="failed to destroy network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.675982 kubelet[2549]: E1101 00:30:20.675930 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:20.676116 kubelet[2549]: E1101 00:30:20.676002 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad"} Nov 1 00:30:20.676178 kubelet[2549]: E1101 00:30:20.676111 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f01ebb62-cbae-4771-a12a-33c798f125cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:20.676178 kubelet[2549]: E1101 00:30:20.676146 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f01ebb62-cbae-4771-a12a-33c798f125cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:30:20.677233 containerd[1462]: time="2025-11-01T00:30:20.677184924Z" level=error msg="StopPodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" failed" error="failed to destroy network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.677417 kubelet[2549]: E1101 00:30:20.677376 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:20.677506 kubelet[2549]: E1101 00:30:20.677430 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd"} Nov 1 00:30:20.677506 kubelet[2549]: E1101 00:30:20.677474 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb24b203-bba2-4a68-ac20-bbf747c87903\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:20.677647 kubelet[2549]: E1101 00:30:20.677508 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb24b203-bba2-4a68-ac20-bbf747c87903\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:30:20.680882 containerd[1462]: time="2025-11-01T00:30:20.680831746Z" level=error msg="StopPodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" failed" error="failed to 
destroy network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:20.681162 kubelet[2549]: E1101 00:30:20.681122 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:20.681263 kubelet[2549]: E1101 00:30:20.681170 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65"} Nov 1 00:30:20.681263 kubelet[2549]: E1101 00:30:20.681210 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73f63e7d-cd05-453e-9fac-681616f1563c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:20.681263 kubelet[2549]: E1101 00:30:20.681248 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73f63e7d-cd05-453e-9fac-681616f1563c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:30:20.709342 kubelet[2549]: E1101 00:30:20.709275 2549 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:30:20.709492 kubelet[2549]: E1101 00:30:20.709376 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e557abe-a350-4983-a5a3-ea11db3910b6-config-volume podName:2e557abe-a350-4983-a5a3-ea11db3910b6 nodeName:}" failed. No retries permitted until 2025-11-01 00:30:21.209352267 +0000 UTC m=+39.075768566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2e557abe-a350-4983-a5a3-ea11db3910b6-config-volume") pod "coredns-668d6bf9bc-jbcpc" (UID: "2e557abe-a350-4983-a5a3-ea11db3910b6") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:30:20.709634 kubelet[2549]: E1101 00:30:20.709289 2549 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:30:20.709776 kubelet[2549]: E1101 00:30:20.709682 2549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37d7183e-e34d-4dec-b261-c74c0840b2de-config-volume podName:37d7183e-e34d-4dec-b261-c74c0840b2de nodeName:}" failed. No retries permitted until 2025-11-01 00:30:21.209661526 +0000 UTC m=+39.076077839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/37d7183e-e34d-4dec-b261-c74c0840b2de-config-volume") pod "coredns-668d6bf9bc-pd8zf" (UID: "37d7183e-e34d-4dec-b261-c74c0840b2de") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:30:21.345059 containerd[1462]: time="2025-11-01T00:30:21.343640945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd8zf,Uid:37d7183e-e34d-4dec-b261-c74c0840b2de,Namespace:kube-system,Attempt:0,}" Nov 1 00:30:21.378643 containerd[1462]: time="2025-11-01T00:30:21.378148237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jbcpc,Uid:2e557abe-a350-4983-a5a3-ea11db3910b6,Namespace:kube-system,Attempt:0,}" Nov 1 00:30:21.470382 containerd[1462]: time="2025-11-01T00:30:21.470304991Z" level=error msg="Failed to destroy network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.472308 containerd[1462]: time="2025-11-01T00:30:21.470786051Z" level=error msg="encountered an error cleaning up failed sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.472308 containerd[1462]: time="2025-11-01T00:30:21.470861923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd8zf,Uid:37d7183e-e34d-4dec-b261-c74c0840b2de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.474125 kubelet[2549]: E1101 00:30:21.472825 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.474125 kubelet[2549]: E1101 00:30:21.472907 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pd8zf" Nov 1 00:30:21.474125 kubelet[2549]: E1101 00:30:21.472940 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pd8zf" Nov 1 00:30:21.475105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8-shm.mount: Deactivated successfully. 
Nov 1 00:30:21.478752 kubelet[2549]: E1101 00:30:21.474060 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pd8zf_kube-system(37d7183e-e34d-4dec-b261-c74c0840b2de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pd8zf_kube-system(37d7183e-e34d-4dec-b261-c74c0840b2de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pd8zf" podUID="37d7183e-e34d-4dec-b261-c74c0840b2de" Nov 1 00:30:21.491776 containerd[1462]: time="2025-11-01T00:30:21.491720025Z" level=error msg="Failed to destroy network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.492241 containerd[1462]: time="2025-11-01T00:30:21.492194330Z" level=error msg="encountered an error cleaning up failed sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.492473 containerd[1462]: time="2025-11-01T00:30:21.492271848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jbcpc,Uid:2e557abe-a350-4983-a5a3-ea11db3910b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.494237 kubelet[2549]: E1101 00:30:21.494191 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.496706 kubelet[2549]: E1101 00:30:21.494411 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jbcpc" Nov 1 00:30:21.496706 kubelet[2549]: E1101 00:30:21.494472 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jbcpc" Nov 1 00:30:21.496706 kubelet[2549]: E1101 00:30:21.494553 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jbcpc_kube-system(2e557abe-a350-4983-a5a3-ea11db3910b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jbcpc_kube-system(2e557abe-a350-4983-a5a3-ea11db3910b6)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jbcpc" podUID="2e557abe-a350-4983-a5a3-ea11db3910b6" Nov 1 00:30:21.496893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050-shm.mount: Deactivated successfully. Nov 1 00:30:21.550403 kubelet[2549]: I1101 00:30:21.550355 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:21.551831 containerd[1462]: time="2025-11-01T00:30:21.551262572Z" level=info msg="StopPodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\"" Nov 1 00:30:21.551831 containerd[1462]: time="2025-11-01T00:30:21.551598335Z" level=info msg="Ensure that sandbox 5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8 in task-service has been cleanup successfully" Nov 1 00:30:21.556142 kubelet[2549]: I1101 00:30:21.555867 2549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:21.557720 containerd[1462]: time="2025-11-01T00:30:21.557030159Z" level=info msg="StopPodSandbox for \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\"" Nov 1 00:30:21.557720 containerd[1462]: time="2025-11-01T00:30:21.557288413Z" level=info msg="Ensure that sandbox c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050 in task-service has been cleanup successfully" Nov 1 00:30:21.614535 containerd[1462]: time="2025-11-01T00:30:21.614300397Z" level=error msg="StopPodSandbox for 
\"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" failed" error="failed to destroy network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.615130 kubelet[2549]: E1101 00:30:21.614634 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:21.615130 kubelet[2549]: E1101 00:30:21.614697 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050"} Nov 1 00:30:21.615130 kubelet[2549]: E1101 00:30:21.614750 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e557abe-a350-4983-a5a3-ea11db3910b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:21.615130 kubelet[2549]: E1101 00:30:21.614801 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e557abe-a350-4983-a5a3-ea11db3910b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jbcpc" podUID="2e557abe-a350-4983-a5a3-ea11db3910b6" Nov 1 00:30:21.634196 containerd[1462]: time="2025-11-01T00:30:21.634136449Z" level=error msg="StopPodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" failed" error="failed to destroy network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:30:21.634644 kubelet[2549]: E1101 00:30:21.634455 2549 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:21.634644 kubelet[2549]: E1101 00:30:21.634518 2549 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8"} Nov 1 00:30:21.634644 kubelet[2549]: E1101 00:30:21.634569 2549 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37d7183e-e34d-4dec-b261-c74c0840b2de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:30:21.634644 kubelet[2549]: E1101 00:30:21.634603 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37d7183e-e34d-4dec-b261-c74c0840b2de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pd8zf" podUID="37d7183e-e34d-4dec-b261-c74c0840b2de" Nov 1 00:30:27.940779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311841273.mount: Deactivated successfully. Nov 1 00:30:27.972773 containerd[1462]: time="2025-11-01T00:30:27.972702141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:30:27.974518 containerd[1462]: time="2025-11-01T00:30:27.974277764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:30:27.976059 containerd[1462]: time="2025-11-01T00:30:27.975762612Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:30:27.979864 containerd[1462]: time="2025-11-01T00:30:27.979815640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:30:27.980645 containerd[1462]: time="2025-11-01T00:30:27.980594854Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.492831532s" Nov 1 00:30:27.980758 containerd[1462]: time="2025-11-01T00:30:27.980650362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:30:28.005244 containerd[1462]: time="2025-11-01T00:30:28.005189230Z" level=info msg="CreateContainer within sandbox \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:30:28.026856 containerd[1462]: time="2025-11-01T00:30:28.026797836Z" level=info msg="CreateContainer within sandbox \"80f8032a30ecfa6ecee8b8e456f8a3d92cc329f463c0e61102bce254903736a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"203b2452f3738a3ba71741802b37dccb0e3d4f018f1673972edaaafc102eeb4c\"" Nov 1 00:30:28.028168 containerd[1462]: time="2025-11-01T00:30:28.027504083Z" level=info msg="StartContainer for \"203b2452f3738a3ba71741802b37dccb0e3d4f018f1673972edaaafc102eeb4c\"" Nov 1 00:30:28.067228 systemd[1]: Started cri-containerd-203b2452f3738a3ba71741802b37dccb0e3d4f018f1673972edaaafc102eeb4c.scope - libcontainer container 203b2452f3738a3ba71741802b37dccb0e3d4f018f1673972edaaafc102eeb4c. Nov 1 00:30:28.116048 containerd[1462]: time="2025-11-01T00:30:28.115981104Z" level=info msg="StartContainer for \"203b2452f3738a3ba71741802b37dccb0e3d4f018f1673972edaaafc102eeb4c\" returns successfully" Nov 1 00:30:28.240298 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:30:28.240446 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 00:30:28.379296 containerd[1462]: time="2025-11-01T00:30:28.379245422Z" level=info msg="StopPodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\"" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.477 [INFO][3789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.477 [INFO][3789] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" iface="eth0" netns="/var/run/netns/cni-782dd359-a51b-5c95-a176-7ae58560126f" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.478 [INFO][3789] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" iface="eth0" netns="/var/run/netns/cni-782dd359-a51b-5c95-a176-7ae58560126f" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.478 [INFO][3789] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" iface="eth0" netns="/var/run/netns/cni-782dd359-a51b-5c95-a176-7ae58560126f" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.478 [INFO][3789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.479 [INFO][3789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.531 [INFO][3797] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.531 [INFO][3797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.532 [INFO][3797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.542 [WARNING][3797] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.542 [INFO][3797] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.544 [INFO][3797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:28.553284 containerd[1462]: 2025-11-01 00:30:28.548 [INFO][3789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:28.554092 containerd[1462]: time="2025-11-01T00:30:28.553481149Z" level=info msg="TearDown network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" successfully" Nov 1 00:30:28.554092 containerd[1462]: time="2025-11-01T00:30:28.553540335Z" level=info msg="StopPodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" returns successfully" Nov 1 00:30:28.608570 kubelet[2549]: I1101 00:30:28.608493 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d7jlj" podStartSLOduration=3.455736294 podStartE2EDuration="21.608467969s" podCreationTimestamp="2025-11-01 00:30:07 +0000 UTC" firstStartedPulling="2025-11-01 00:30:09.82912139 +0000 UTC m=+27.695537698" lastFinishedPulling="2025-11-01 00:30:27.981853063 +0000 UTC m=+45.848269373" observedRunningTime="2025-11-01 00:30:28.606844349 +0000 UTC m=+46.473260669" 
watchObservedRunningTime="2025-11-01 00:30:28.608467969 +0000 UTC m=+46.474884289" Nov 1 00:30:28.673722 kubelet[2549]: I1101 00:30:28.673584 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-backend-key-pair\") pod \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\" (UID: \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\") " Nov 1 00:30:28.673722 kubelet[2549]: I1101 00:30:28.673648 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnjvp\" (UniqueName: \"kubernetes.io/projected/95a1fab6-d611-401e-9fe4-c7918f3b5d89-kube-api-access-tnjvp\") pod \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\" (UID: \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\") " Nov 1 00:30:28.673722 kubelet[2549]: I1101 00:30:28.673680 2549 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-ca-bundle\") pod \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\" (UID: \"95a1fab6-d611-401e-9fe4-c7918f3b5d89\") " Nov 1 00:30:28.679380 kubelet[2549]: I1101 00:30:28.676341 2549 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "95a1fab6-d611-401e-9fe4-c7918f3b5d89" (UID: "95a1fab6-d611-401e-9fe4-c7918f3b5d89"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:30:28.682900 kubelet[2549]: I1101 00:30:28.682862 2549 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "95a1fab6-d611-401e-9fe4-c7918f3b5d89" (UID: "95a1fab6-d611-401e-9fe4-c7918f3b5d89"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:30:28.685194 kubelet[2549]: I1101 00:30:28.685159 2549 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a1fab6-d611-401e-9fe4-c7918f3b5d89-kube-api-access-tnjvp" (OuterVolumeSpecName: "kube-api-access-tnjvp") pod "95a1fab6-d611-401e-9fe4-c7918f3b5d89" (UID: "95a1fab6-d611-401e-9fe4-c7918f3b5d89"). InnerVolumeSpecName "kube-api-access-tnjvp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:30:28.774754 kubelet[2549]: I1101 00:30:28.774703 2549 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" DevicePath \"\"" Nov 1 00:30:28.774754 kubelet[2549]: I1101 00:30:28.774750 2549 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tnjvp\" (UniqueName: \"kubernetes.io/projected/95a1fab6-d611-401e-9fe4-c7918f3b5d89-kube-api-access-tnjvp\") on node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" DevicePath \"\"" Nov 1 00:30:28.774754 kubelet[2549]: I1101 00:30:28.774768 2549 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95a1fab6-d611-401e-9fe4-c7918f3b5d89-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84\" DevicePath \"\"" Nov 1 00:30:28.886957 systemd[1]: Removed slice kubepods-besteffort-pod95a1fab6_d611_401e_9fe4_c7918f3b5d89.slice - libcontainer container kubepods-besteffort-pod95a1fab6_d611_401e_9fe4_c7918f3b5d89.slice. Nov 1 00:30:28.941267 systemd[1]: run-netns-cni\x2d782dd359\x2da51b\x2d5c95\x2da176\x2d7ae58560126f.mount: Deactivated successfully. 
Nov 1 00:30:28.941451 systemd[1]: var-lib-kubelet-pods-95a1fab6\x2dd611\x2d401e\x2d9fe4\x2dc7918f3b5d89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnjvp.mount: Deactivated successfully. Nov 1 00:30:28.941569 systemd[1]: var-lib-kubelet-pods-95a1fab6\x2dd611\x2d401e\x2d9fe4\x2dc7918f3b5d89-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:30:28.968246 systemd[1]: Created slice kubepods-besteffort-pod34a02912_f185_42ba_a75a_ca30896a4f61.slice - libcontainer container kubepods-besteffort-pod34a02912_f185_42ba_a75a_ca30896a4f61.slice. Nov 1 00:30:29.078232 kubelet[2549]: I1101 00:30:29.078164 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjccf\" (UniqueName: \"kubernetes.io/projected/34a02912-f185-42ba-a75a-ca30896a4f61-kube-api-access-tjccf\") pod \"whisker-768465fd8d-cxghm\" (UID: \"34a02912-f185-42ba-a75a-ca30896a4f61\") " pod="calico-system/whisker-768465fd8d-cxghm" Nov 1 00:30:29.078232 kubelet[2549]: I1101 00:30:29.078222 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a02912-f185-42ba-a75a-ca30896a4f61-whisker-ca-bundle\") pod \"whisker-768465fd8d-cxghm\" (UID: \"34a02912-f185-42ba-a75a-ca30896a4f61\") " pod="calico-system/whisker-768465fd8d-cxghm" Nov 1 00:30:29.078512 kubelet[2549]: I1101 00:30:29.078252 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34a02912-f185-42ba-a75a-ca30896a4f61-whisker-backend-key-pair\") pod \"whisker-768465fd8d-cxghm\" (UID: \"34a02912-f185-42ba-a75a-ca30896a4f61\") " pod="calico-system/whisker-768465fd8d-cxghm" Nov 1 00:30:29.273735 containerd[1462]: time="2025-11-01T00:30:29.273588566Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-768465fd8d-cxghm,Uid:34a02912-f185-42ba-a75a-ca30896a4f61,Namespace:calico-system,Attempt:0,}" Nov 1 00:30:29.430131 systemd-networkd[1357]: cali6c2f19fc9ff: Link UP Nov 1 00:30:29.431705 systemd-networkd[1357]: cali6c2f19fc9ff: Gained carrier Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.328 [INFO][3823] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.343 [INFO][3823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0 whisker-768465fd8d- calico-system 34a02912-f185-42ba-a75a-ca30896a4f61 887 0 2025-11-01 00:30:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:768465fd8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 whisker-768465fd8d-cxghm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6c2f19fc9ff [] [] }} ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.343 [INFO][3823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.379 [INFO][3836] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" HandleID="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.379 [INFO][3836] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" HandleID="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"whisker-768465fd8d-cxghm", "timestamp":"2025-11-01 00:30:29.379000193 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.379 [INFO][3836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.379 [INFO][3836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.379 [INFO][3836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.390 [INFO][3836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.394 [INFO][3836] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.399 [INFO][3836] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.401 [INFO][3836] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.403 [INFO][3836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.403 [INFO][3836] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.404 [INFO][3836] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158 Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.408 [INFO][3836] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 
handle="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.415 [INFO][3836] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.1/26] block=192.168.43.0/26 handle="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.415 [INFO][3836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.1/26] handle="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.415 [INFO][3836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:29.455819 containerd[1462]: 2025-11-01 00:30:29.415 [INFO][3836] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.1/26] IPv6=[] ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" HandleID="k8s-pod-network.ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.457394 containerd[1462]: 2025-11-01 00:30:29.417 [INFO][3823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0", GenerateName:"whisker-768465fd8d-", 
Namespace:"calico-system", SelfLink:"", UID:"34a02912-f185-42ba-a75a-ca30896a4f61", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768465fd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"whisker-768465fd8d-cxghm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c2f19fc9ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:29.457394 containerd[1462]: 2025-11-01 00:30:29.418 [INFO][3823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.1/32] ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.457394 containerd[1462]: 2025-11-01 00:30:29.418 [INFO][3823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c2f19fc9ff ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" 
WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.457394 containerd[1462]: 2025-11-01 00:30:29.432 [INFO][3823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.457394 containerd[1462]: 2025-11-01 00:30:29.433 [INFO][3823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0", GenerateName:"whisker-768465fd8d-", Namespace:"calico-system", SelfLink:"", UID:"34a02912-f185-42ba-a75a-ca30896a4f61", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768465fd8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", 
ContainerID:"ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158", Pod:"whisker-768465fd8d-cxghm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c2f19fc9ff", MAC:"e2:d7:0f:5b:35:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:29.457394 containerd[1462]: 2025-11-01 00:30:29.452 [INFO][3823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158" Namespace="calico-system" Pod="whisker-768465fd8d-cxghm" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--768465fd8d--cxghm-eth0" Nov 1 00:30:29.479908 containerd[1462]: time="2025-11-01T00:30:29.479759215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:29.479908 containerd[1462]: time="2025-11-01T00:30:29.479844396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:29.479908 containerd[1462]: time="2025-11-01T00:30:29.479869881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:29.480225 containerd[1462]: time="2025-11-01T00:30:29.479977946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:29.508229 systemd[1]: Started cri-containerd-ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158.scope - libcontainer container ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158. 
Nov 1 00:30:29.568856 containerd[1462]: time="2025-11-01T00:30:29.568789856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768465fd8d-cxghm,Uid:34a02912-f185-42ba-a75a-ca30896a4f61,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad4f98f6998bdf7ab8e5a64bb970007771b21a6b3e48f780e7e4e692b84f8158\"" Nov 1 00:30:29.572382 containerd[1462]: time="2025-11-01T00:30:29.572342045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:30:29.768989 containerd[1462]: time="2025-11-01T00:30:29.768908779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:29.770656 containerd[1462]: time="2025-11-01T00:30:29.770575973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:30:29.770915 containerd[1462]: time="2025-11-01T00:30:29.770709983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:30:29.771688 kubelet[2549]: E1101 00:30:29.771314 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:30:29.771688 kubelet[2549]: E1101 00:30:29.771394 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:30:29.772356 kubelet[2549]: E1101 00:30:29.771601 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9354361088054afe9becf34fc1077d69,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 
1 00:30:29.775604 containerd[1462]: time="2025-11-01T00:30:29.775565805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:30:29.983553 containerd[1462]: time="2025-11-01T00:30:29.983214464Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:29.985697 containerd[1462]: time="2025-11-01T00:30:29.984621794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:30:29.985697 containerd[1462]: time="2025-11-01T00:30:29.984758044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:30:29.985858 kubelet[2549]: E1101 00:30:29.985088 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:30:29.985858 kubelet[2549]: E1101 00:30:29.985235 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:30:29.986034 kubelet[2549]: E1101 00:30:29.985491 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:29.987315 kubelet[2549]: E1101 00:30:29.987211 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:30:30.305476 kubelet[2549]: I1101 00:30:30.305431 2549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a1fab6-d611-401e-9fe4-c7918f3b5d89" path="/var/lib/kubelet/pods/95a1fab6-d611-401e-9fe4-c7918f3b5d89/volumes" Nov 1 00:30:30.589684 kubelet[2549]: E1101 00:30:30.589510 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:30:31.271449 systemd-networkd[1357]: cali6c2f19fc9ff: Gained IPv6LL Nov 1 00:30:31.304189 containerd[1462]: time="2025-11-01T00:30:31.303214667Z" level=info msg="StopPodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\"" Nov 1 00:30:31.304189 containerd[1462]: time="2025-11-01T00:30:31.303782802Z" level=info msg="StopPodSandbox for \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\"" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.432 [INFO][4066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.432 [INFO][4066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" iface="eth0" netns="/var/run/netns/cni-de0768fd-3d58-0f72-94de-3ce25ed9abbd" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.432 [INFO][4066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" iface="eth0" netns="/var/run/netns/cni-de0768fd-3d58-0f72-94de-3ce25ed9abbd" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.433 [INFO][4066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" iface="eth0" netns="/var/run/netns/cni-de0768fd-3d58-0f72-94de-3ce25ed9abbd" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.433 [INFO][4066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.433 [INFO][4066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.516 [INFO][4083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.519 [INFO][4083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.520 [INFO][4083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.530 [WARNING][4083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.530 [INFO][4083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.532 [INFO][4083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:31.538977 containerd[1462]: 2025-11-01 00:30:31.537 [INFO][4066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:31.540628 containerd[1462]: time="2025-11-01T00:30:31.539085462Z" level=info msg="TearDown network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" successfully" Nov 1 00:30:31.540628 containerd[1462]: time="2025-11-01T00:30:31.539137819Z" level=info msg="StopPodSandbox for \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" returns successfully" Nov 1 00:30:31.550290 containerd[1462]: time="2025-11-01T00:30:31.545542883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v2bt,Uid:f2c53676-0b50-4c2c-9234-572240cab45e,Namespace:calico-system,Attempt:1,}" Nov 1 00:30:31.548563 systemd[1]: run-netns-cni\x2dde0768fd\x2d3d58\x2d0f72\x2d94de\x2d3ce25ed9abbd.mount: Deactivated successfully. 
Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.428 [INFO][4067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.430 [INFO][4067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" iface="eth0" netns="/var/run/netns/cni-85551a73-3144-6ee9-278e-d1ea8575fae7" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.431 [INFO][4067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" iface="eth0" netns="/var/run/netns/cni-85551a73-3144-6ee9-278e-d1ea8575fae7" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.431 [INFO][4067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" iface="eth0" netns="/var/run/netns/cni-85551a73-3144-6ee9-278e-d1ea8575fae7" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.432 [INFO][4067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.432 [INFO][4067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.522 [INFO][4081] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.561395 
containerd[1462]: 2025-11-01 00:30:31.523 [INFO][4081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.533 [INFO][4081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.550 [WARNING][4081] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.550 [INFO][4081] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.555 [INFO][4081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:31.561395 containerd[1462]: 2025-11-01 00:30:31.557 [INFO][4067] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:31.561395 containerd[1462]: time="2025-11-01T00:30:31.561165470Z" level=info msg="TearDown network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" successfully" Nov 1 00:30:31.561395 containerd[1462]: time="2025-11-01T00:30:31.561196546Z" level=info msg="StopPodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" returns successfully" Nov 1 00:30:31.567447 containerd[1462]: time="2025-11-01T00:30:31.567053657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749d6dfb67-b9g5c,Uid:eb24b203-bba2-4a68-ac20-bbf747c87903,Namespace:calico-system,Attempt:1,}" Nov 1 00:30:31.569207 systemd[1]: run-netns-cni\x2d85551a73\x2d3144\x2d6ee9\x2d278e\x2dd1ea8575fae7.mount: Deactivated successfully. Nov 1 00:30:31.889312 systemd-networkd[1357]: cali72d0940ade0: Link UP Nov 1 00:30:31.896180 systemd-networkd[1357]: cali72d0940ade0: Gained carrier Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.674 [INFO][4106] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.699 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0 calico-kube-controllers-749d6dfb67- calico-system eb24b203-bba2-4a68-ac20-bbf747c87903 915 0 2025-11-01 00:30:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:749d6dfb67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 calico-kube-controllers-749d6dfb67-b9g5c eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] cali72d0940ade0 [] [] }} ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.700 [INFO][4106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.790 [INFO][4121] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" HandleID="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.793 [INFO][4121] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" HandleID="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039de90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"calico-kube-controllers-749d6dfb67-b9g5c", "timestamp":"2025-11-01 00:30:31.790149072 +0000 UTC"}, 
Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.793 [INFO][4121] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.793 [INFO][4121] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.793 [INFO][4121] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.820 [INFO][4121] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.840 [INFO][4121] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.848 [INFO][4121] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.851 [INFO][4121] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.855 [INFO][4121] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.855 [INFO][4121] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.43.0/26 handle="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.857 [INFO][4121] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.865 [INFO][4121] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.878 [INFO][4121] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.2/26] block=192.168.43.0/26 handle="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.878 [INFO][4121] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.2/26] handle="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.878 [INFO][4121] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:30:31.933110 containerd[1462]: 2025-11-01 00:30:31.878 [INFO][4121] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.2/26] IPv6=[] ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" HandleID="k8s-pod-network.c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.935398 containerd[1462]: 2025-11-01 00:30:31.881 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0", GenerateName:"calico-kube-controllers-749d6dfb67-", Namespace:"calico-system", SelfLink:"", UID:"eb24b203-bba2-4a68-ac20-bbf747c87903", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749d6dfb67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"calico-kube-controllers-749d6dfb67-b9g5c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72d0940ade0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:31.935398 containerd[1462]: 2025-11-01 00:30:31.882 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.2/32] ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.935398 containerd[1462]: 2025-11-01 00:30:31.882 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72d0940ade0 ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.935398 containerd[1462]: 2025-11-01 00:30:31.900 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.935398 containerd[1462]: 2025-11-01 00:30:31.903 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0", GenerateName:"calico-kube-controllers-749d6dfb67-", Namespace:"calico-system", SelfLink:"", UID:"eb24b203-bba2-4a68-ac20-bbf747c87903", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749d6dfb67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c", Pod:"calico-kube-controllers-749d6dfb67-b9g5c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72d0940ade0", MAC:"de:60:bf:df:2c:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 
00:30:31.935398 containerd[1462]: 2025-11-01 00:30:31.930 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c" Namespace="calico-system" Pod="calico-kube-controllers-749d6dfb67-b9g5c" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:31.982973 containerd[1462]: time="2025-11-01T00:30:31.981589499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:31.982973 containerd[1462]: time="2025-11-01T00:30:31.981758990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:31.982973 containerd[1462]: time="2025-11-01T00:30:31.981801545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:31.982973 containerd[1462]: time="2025-11-01T00:30:31.981986253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:32.010349 systemd-networkd[1357]: cali86ec928963f: Link UP Nov 1 00:30:32.016514 systemd-networkd[1357]: cali86ec928963f: Gained carrier Nov 1 00:30:32.044849 systemd[1]: Started cri-containerd-c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c.scope - libcontainer container c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c. 
Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.676 [INFO][4097] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.704 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0 csi-node-driver- calico-system f2c53676-0b50-4c2c-9234-572240cab45e 916 0 2025-11-01 00:30:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 csi-node-driver-9v2bt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali86ec928963f [] [] }} ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.707 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.791 [INFO][4123] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" HandleID="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" 
Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.795 [INFO][4123] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" HandleID="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000421cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"csi-node-driver-9v2bt", "timestamp":"2025-11-01 00:30:31.79175592 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.795 [INFO][4123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.878 [INFO][4123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.878 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.924 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.941 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.950 [INFO][4123] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.954 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.957 [INFO][4123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.957 [INFO][4123] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.959 [INFO][4123] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.973 [INFO][4123] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 
handle="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.990 [INFO][4123] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.3/26] block=192.168.43.0/26 handle="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.990 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.3/26] handle="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.990 [INFO][4123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:32.060440 containerd[1462]: 2025-11-01 00:30:31.990 [INFO][4123] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.3/26] IPv6=[] ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" HandleID="k8s-pod-network.776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.061831 containerd[1462]: 2025-11-01 00:30:31.999 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"f2c53676-0b50-4c2c-9234-572240cab45e", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"csi-node-driver-9v2bt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86ec928963f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:32.061831 containerd[1462]: 2025-11-01 00:30:32.000 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.3/32] ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.061831 containerd[1462]: 2025-11-01 00:30:32.000 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86ec928963f ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" 
WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.061831 containerd[1462]: 2025-11-01 00:30:32.018 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.061831 containerd[1462]: 2025-11-01 00:30:32.020 [INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2c53676-0b50-4c2c-9234-572240cab45e", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb", Pod:"csi-node-driver-9v2bt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86ec928963f", MAC:"5a:10:04:76:0d:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:32.061831 containerd[1462]: 2025-11-01 00:30:32.052 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb" Namespace="calico-system" Pod="csi-node-driver-9v2bt" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:32.108855 containerd[1462]: time="2025-11-01T00:30:32.108652401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:32.112136 containerd[1462]: time="2025-11-01T00:30:32.110267582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:32.112136 containerd[1462]: time="2025-11-01T00:30:32.110301055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:32.112136 containerd[1462]: time="2025-11-01T00:30:32.110421917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:32.158237 systemd[1]: Started cri-containerd-776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb.scope - libcontainer container 776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb. Nov 1 00:30:32.227607 containerd[1462]: time="2025-11-01T00:30:32.227529141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749d6dfb67-b9g5c,Uid:eb24b203-bba2-4a68-ac20-bbf747c87903,Namespace:calico-system,Attempt:1,} returns sandbox id \"c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c\"" Nov 1 00:30:32.232935 containerd[1462]: time="2025-11-01T00:30:32.232123554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:30:32.245989 containerd[1462]: time="2025-11-01T00:30:32.245949166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v2bt,Uid:f2c53676-0b50-4c2c-9234-572240cab45e,Namespace:calico-system,Attempt:1,} returns sandbox id \"776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb\"" Nov 1 00:30:32.307464 containerd[1462]: time="2025-11-01T00:30:32.306590657Z" level=info msg="StopPodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\"" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.384 [INFO][4236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.384 [INFO][4236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" iface="eth0" netns="/var/run/netns/cni-04bd9cbf-baee-f121-5aa1-f98d859e7c57" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.386 [INFO][4236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" iface="eth0" netns="/var/run/netns/cni-04bd9cbf-baee-f121-5aa1-f98d859e7c57" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.387 [INFO][4236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" iface="eth0" netns="/var/run/netns/cni-04bd9cbf-baee-f121-5aa1-f98d859e7c57" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.387 [INFO][4236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.387 [INFO][4236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.424 [INFO][4243] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.424 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.424 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.435 [WARNING][4243] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.435 [INFO][4243] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.437 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:32.442353 containerd[1462]: 2025-11-01 00:30:32.439 [INFO][4236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:32.444488 containerd[1462]: time="2025-11-01T00:30:32.443431843Z" level=info msg="TearDown network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" successfully" Nov 1 00:30:32.444488 containerd[1462]: time="2025-11-01T00:30:32.443475164Z" level=info msg="StopPodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" returns successfully" Nov 1 00:30:32.444888 containerd[1462]: time="2025-11-01T00:30:32.444668008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cjssg,Uid:73f63e7d-cd05-453e-9fac-681616f1563c,Namespace:calico-system,Attempt:1,}" Nov 1 00:30:32.448000 containerd[1462]: time="2025-11-01T00:30:32.447902838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:32.449919 containerd[1462]: time="2025-11-01T00:30:32.449641121Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:30:32.449919 containerd[1462]: time="2025-11-01T00:30:32.449736479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:30:32.451739 kubelet[2549]: E1101 00:30:32.451268 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:30:32.451739 kubelet[2549]: E1101 00:30:32.451445 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:30:32.452751 kubelet[2549]: E1101 00:30:32.452337 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrrqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-749d6dfb67-b9g5c_calico-system(eb24b203-bba2-4a68-ac20-bbf747c87903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:32.453215 containerd[1462]: time="2025-11-01T00:30:32.451946140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:30:32.454377 kubelet[2549]: E1101 00:30:32.454223 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:30:32.567188 systemd[1]: 
run-netns-cni\x2d04bd9cbf\x2dbaee\x2df121\x2d5aa1\x2df98d859e7c57.mount: Deactivated successfully. Nov 1 00:30:32.618580 kubelet[2549]: E1101 00:30:32.616993 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:30:32.707153 containerd[1462]: time="2025-11-01T00:30:32.706789889Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:32.714254 containerd[1462]: time="2025-11-01T00:30:32.713968700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:30:32.714254 containerd[1462]: time="2025-11-01T00:30:32.714203050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:30:32.715051 kubelet[2549]: E1101 00:30:32.714661 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:30:32.715051 kubelet[2549]: E1101 00:30:32.714717 2549 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:30:32.715051 kubelet[2549]: E1101 00:30:32.714863 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:ni
l,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:32.718683 containerd[1462]: time="2025-11-01T00:30:32.718264171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:30:32.756609 systemd-networkd[1357]: calica63580f8c0: Link UP Nov 1 00:30:32.758941 systemd-networkd[1357]: calica63580f8c0: Gained carrier Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.509 [INFO][4250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.526 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0 goldmane-666569f655- calico-system 73f63e7d-cd05-453e-9fac-681616f1563c 929 0 2025-11-01 00:30:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 goldmane-666569f655-cjssg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calica63580f8c0 [] [] }} ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" 
WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.526 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.659 [INFO][4261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" HandleID="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.660 [INFO][4261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" HandleID="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"goldmane-666569f655-cjssg", "timestamp":"2025-11-01 00:30:32.659615104 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.660 [INFO][4261] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.661 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.661 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.671 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.679 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.694 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.705 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.715 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.715 [INFO][4261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.720 [INFO][4261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff Nov 1 00:30:32.790543 
containerd[1462]: 2025-11-01 00:30:32.732 [INFO][4261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.742 [INFO][4261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.4/26] block=192.168.43.0/26 handle="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.742 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.4/26] handle="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.743 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:30:32.790543 containerd[1462]: 2025-11-01 00:30:32.743 [INFO][4261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.4/26] IPv6=[] ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" HandleID="k8s-pod-network.73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.792959 containerd[1462]: 2025-11-01 00:30:32.749 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"73f63e7d-cd05-453e-9fac-681616f1563c", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"goldmane-666569f655-cjssg", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica63580f8c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:32.792959 containerd[1462]: 2025-11-01 00:30:32.749 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.4/32] ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.792959 containerd[1462]: 2025-11-01 00:30:32.749 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica63580f8c0 ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.792959 containerd[1462]: 2025-11-01 00:30:32.761 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.792959 containerd[1462]: 2025-11-01 00:30:32.761 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"73f63e7d-cd05-453e-9fac-681616f1563c", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff", Pod:"goldmane-666569f655-cjssg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica63580f8c0", MAC:"12:db:16:7f:67:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:32.792959 containerd[1462]: 2025-11-01 00:30:32.787 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff" Namespace="calico-system" Pod="goldmane-666569f655-cjssg" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:32.851151 
containerd[1462]: time="2025-11-01T00:30:32.849068808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:32.851151 containerd[1462]: time="2025-11-01T00:30:32.849159067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:32.851151 containerd[1462]: time="2025-11-01T00:30:32.849196074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:32.851151 containerd[1462]: time="2025-11-01T00:30:32.849407071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:32.905229 systemd[1]: Started cri-containerd-73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff.scope - libcontainer container 73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff. 
Nov 1 00:30:33.028857 containerd[1462]: time="2025-11-01T00:30:33.028375739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cjssg,Uid:73f63e7d-cd05-453e-9fac-681616f1563c,Namespace:calico-system,Attempt:1,} returns sandbox id \"73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff\"" Nov 1 00:30:33.028857 containerd[1462]: time="2025-11-01T00:30:33.028593296Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:33.032997 containerd[1462]: time="2025-11-01T00:30:33.032837200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:30:33.033160 containerd[1462]: time="2025-11-01T00:30:33.032979615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:30:33.034740 kubelet[2549]: E1101 00:30:33.033482 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:30:33.034740 kubelet[2549]: E1101 00:30:33.033535 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:30:33.034740 kubelet[2549]: E1101 00:30:33.033781 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:33.035482 containerd[1462]: time="2025-11-01T00:30:33.034315863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:30:33.036456 kubelet[2549]: E1101 00:30:33.035715 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:33.190656 systemd-networkd[1357]: cali72d0940ade0: Gained IPv6LL Nov 1 00:30:33.237071 containerd[1462]: time="2025-11-01T00:30:33.236768776Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:33.238673 containerd[1462]: time="2025-11-01T00:30:33.238459324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:30:33.238673 containerd[1462]: time="2025-11-01T00:30:33.238587193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:30:33.238886 kubelet[2549]: E1101 00:30:33.238821 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:30:33.238961 kubelet[2549]: E1101 00:30:33.238894 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:30:33.239199 kubelet[2549]: E1101 00:30:33.239116 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p65gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cjssg_calico-system(73f63e7d-cd05-453e-9fac-681616f1563c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:33.240964 kubelet[2549]: E1101 00:30:33.240663 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:30:33.305091 containerd[1462]: time="2025-11-01T00:30:33.304034938Z" level=info msg="StopPodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\"" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.370 [INFO][4345] cni-plugin/k8s.go 
640: Cleaning up netns ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.371 [INFO][4345] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" iface="eth0" netns="/var/run/netns/cni-b17cb26a-a291-8bcb-ace6-9692800315f7" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.371 [INFO][4345] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" iface="eth0" netns="/var/run/netns/cni-b17cb26a-a291-8bcb-ace6-9692800315f7" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.374 [INFO][4345] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" iface="eth0" netns="/var/run/netns/cni-b17cb26a-a291-8bcb-ace6-9692800315f7" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.374 [INFO][4345] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.374 [INFO][4345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.409 [INFO][4353] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.409 [INFO][4353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.410 [INFO][4353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.420 [WARNING][4353] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.420 [INFO][4353] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.422 [INFO][4353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:33.425341 containerd[1462]: 2025-11-01 00:30:33.423 [INFO][4345] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:33.429346 containerd[1462]: time="2025-11-01T00:30:33.429163595Z" level=info msg="TearDown network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" successfully" Nov 1 00:30:33.429346 containerd[1462]: time="2025-11-01T00:30:33.429206829Z" level=info msg="StopPodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" returns successfully" Nov 1 00:30:33.431179 containerd[1462]: time="2025-11-01T00:30:33.430046745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-htf5s,Uid:f01ebb62-cbae-4771-a12a-33c798f125cd,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:30:33.434106 systemd[1]: run-netns-cni\x2db17cb26a\x2da291\x2d8bcb\x2dace6\x2d9692800315f7.mount: Deactivated successfully. Nov 1 00:30:33.585269 systemd-networkd[1357]: califa2f99d1cfe: Link UP Nov 1 00:30:33.585608 systemd-networkd[1357]: califa2f99d1cfe: Gained carrier Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.483 [INFO][4359] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.496 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0 calico-apiserver-7bb458b5b7- calico-apiserver f01ebb62-cbae-4771-a12a-33c798f125cd 949 0 2025-11-01 00:30:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb458b5b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 calico-apiserver-7bb458b5b7-htf5s eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] califa2f99d1cfe [] [] }} ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.496 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.527 [INFO][4371] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" HandleID="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.527 [INFO][4371] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" HandleID="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"calico-apiserver-7bb458b5b7-htf5s", "timestamp":"2025-11-01 00:30:33.527684159 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.527 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.528 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.528 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.537 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.545 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.554 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.556 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.559 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.559 [INFO][4371] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" 
host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.560 [INFO][4371] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1 Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.566 [INFO][4371] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.577 [INFO][4371] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.5/26] block=192.168.43.0/26 handle="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.577 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.5/26] handle="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.577 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:30:33.610096 containerd[1462]: 2025-11-01 00:30:33.577 [INFO][4371] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.5/26] IPv6=[] ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" HandleID="k8s-pod-network.b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.611490 containerd[1462]: 2025-11-01 00:30:33.579 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f01ebb62-cbae-4771-a12a-33c798f125cd", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", 
ContainerID:"", Pod:"calico-apiserver-7bb458b5b7-htf5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa2f99d1cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:33.611490 containerd[1462]: 2025-11-01 00:30:33.579 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.5/32] ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.611490 containerd[1462]: 2025-11-01 00:30:33.579 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa2f99d1cfe ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.611490 containerd[1462]: 2025-11-01 00:30:33.582 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.611490 containerd[1462]: 2025-11-01 00:30:33.582 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f01ebb62-cbae-4771-a12a-33c798f125cd", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1", Pod:"calico-apiserver-7bb458b5b7-htf5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa2f99d1cfe", MAC:"3e:eb:7f:c7:ff:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:33.611490 containerd[1462]: 2025-11-01 00:30:33.599 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-htf5s" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:33.623417 kubelet[2549]: E1101 00:30:33.622751 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:30:33.623417 kubelet[2549]: E1101 00:30:33.622980 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:30:33.625790 kubelet[2549]: E1101 00:30:33.625132 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:33.640166 systemd-networkd[1357]: cali86ec928963f: Gained IPv6LL Nov 1 00:30:33.650420 containerd[1462]: time="2025-11-01T00:30:33.650244173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:33.650837 containerd[1462]: time="2025-11-01T00:30:33.650702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:33.651075 containerd[1462]: time="2025-11-01T00:30:33.650990687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:33.653071 containerd[1462]: time="2025-11-01T00:30:33.652745570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:33.707246 systemd[1]: Started cri-containerd-b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1.scope - libcontainer container b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1. 
Nov 1 00:30:33.805244 containerd[1462]: time="2025-11-01T00:30:33.805194052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-htf5s,Uid:f01ebb62-cbae-4771-a12a-33c798f125cd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1\""
Nov 1 00:30:33.807716 containerd[1462]: time="2025-11-01T00:30:33.807587434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:30:34.009543 containerd[1462]: time="2025-11-01T00:30:34.009406940Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:30:34.014209 containerd[1462]: time="2025-11-01T00:30:34.013989360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:30:34.014209 containerd[1462]: time="2025-11-01T00:30:34.014074360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:30:34.014423 kubelet[2549]: E1101 00:30:34.014380 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:30:34.014498 kubelet[2549]: E1101 00:30:34.014446 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:30:34.014715 kubelet[2549]: E1101 00:30:34.014628 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9btkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-htf5s_calico-apiserver(f01ebb62-cbae-4771-a12a-33c798f125cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:30:34.016447 kubelet[2549]: E1101 00:30:34.016295 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd"
Nov 1 00:30:34.406338 systemd-networkd[1357]: calica63580f8c0: Gained IPv6LL
Nov 1 00:30:34.626503 kubelet[2549]: E1101 00:30:34.626424 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd"
Nov 1 00:30:34.627955 kubelet[2549]: E1101 00:30:34.626859 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c"
Nov 1 00:30:35.303483 containerd[1462]: time="2025-11-01T00:30:35.303232125Z" level=info msg="StopPodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\""
Nov 1 00:30:35.303327 systemd-networkd[1357]: califa2f99d1cfe: Gained IPv6LL
Nov 1 00:30:35.306918 containerd[1462]: time="2025-11-01T00:30:35.303484350Z" level=info msg="StopPodSandbox for \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\""
Nov 1 00:30:35.311450 containerd[1462]: time="2025-11-01T00:30:35.311403488Z" level=info msg="StopPodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\""
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.495 [INFO][4494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.495 [INFO][4494] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" iface="eth0" netns="/var/run/netns/cni-4273cc63-be46-a101-f0bd-d160f4de9f8e"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.495 [INFO][4494] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" iface="eth0" netns="/var/run/netns/cni-4273cc63-be46-a101-f0bd-d160f4de9f8e"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.495 [INFO][4494] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" iface="eth0" netns="/var/run/netns/cni-4273cc63-be46-a101-f0bd-d160f4de9f8e"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.495 [INFO][4494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.495 [INFO][4494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.585 [INFO][4517] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.588 [INFO][4517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.588 [INFO][4517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.606 [WARNING][4517] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.606 [INFO][4517] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0"
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.612 [INFO][4517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:30:35.621100 containerd[1462]: 2025-11-01 00:30:35.616 [INFO][4494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8"
Nov 1 00:30:35.627840 containerd[1462]: time="2025-11-01T00:30:35.623208215Z" level=info msg="TearDown network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" successfully"
Nov 1 00:30:35.627840 containerd[1462]: time="2025-11-01T00:30:35.623256642Z" level=info msg="StopPodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" returns successfully"
Nov 1 00:30:35.630380 containerd[1462]: time="2025-11-01T00:30:35.630315375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd8zf,Uid:37d7183e-e34d-4dec-b261-c74c0840b2de,Namespace:kube-system,Attempt:1,}"
Nov 1 00:30:35.638434 kubelet[2549]: E1101 00:30:35.635483 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd"
Nov 1 00:30:35.635605 systemd[1]: run-netns-cni\x2d4273cc63\x2dbe46\x2da101\x2df0bd\x2dd160f4de9f8e.mount: Deactivated successfully.
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.518 [INFO][4495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.519 [INFO][4495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" iface="eth0" netns="/var/run/netns/cni-9ec4e202-d457-ba1a-3093-e5352a2f914c"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.520 [INFO][4495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" iface="eth0" netns="/var/run/netns/cni-9ec4e202-d457-ba1a-3093-e5352a2f914c"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.521 [INFO][4495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" iface="eth0" netns="/var/run/netns/cni-9ec4e202-d457-ba1a-3093-e5352a2f914c"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.521 [INFO][4495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.521 [INFO][4495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.642 [INFO][4527] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.642 [INFO][4527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.644 [INFO][4527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.676 [WARNING][4527] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.676 [INFO][4527] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.680 [INFO][4527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:30:35.697597 containerd[1462]: 2025-11-01 00:30:35.687 [INFO][4495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0"
Nov 1 00:30:35.700581 containerd[1462]: time="2025-11-01T00:30:35.697894765Z" level=info msg="TearDown network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" successfully"
Nov 1 00:30:35.700581 containerd[1462]: time="2025-11-01T00:30:35.698729503Z" level=info msg="StopPodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" returns successfully"
Nov 1 00:30:35.701507 containerd[1462]: time="2025-11-01T00:30:35.701466108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-gxqtr,Uid:ce0ad95a-ccba-4cd4-91a4-5a94be968da8,Namespace:calico-apiserver,Attempt:1,}"
Nov 1 00:30:35.707216 systemd[1]: run-netns-cni\x2d9ec4e202\x2dd457\x2dba1a\x2d3093\x2de5352a2f914c.mount: Deactivated successfully.
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.510 [INFO][4483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.510 [INFO][4483] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" iface="eth0" netns="/var/run/netns/cni-8e554887-4ac0-9891-a7a7-f9cd9d8ffa4a"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.511 [INFO][4483] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" iface="eth0" netns="/var/run/netns/cni-8e554887-4ac0-9891-a7a7-f9cd9d8ffa4a"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.512 [INFO][4483] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" iface="eth0" netns="/var/run/netns/cni-8e554887-4ac0-9891-a7a7-f9cd9d8ffa4a"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.512 [INFO][4483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.512 [INFO][4483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.654 [INFO][4522] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.654 [INFO][4522] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.681 [INFO][4522] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.717 [WARNING][4522] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.717 [INFO][4522] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0"
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.720 [INFO][4522] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:30:35.726553 containerd[1462]: 2025-11-01 00:30:35.723 [INFO][4483] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050"
Nov 1 00:30:35.728199 containerd[1462]: time="2025-11-01T00:30:35.728141801Z" level=info msg="TearDown network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" successfully"
Nov 1 00:30:35.728499 containerd[1462]: time="2025-11-01T00:30:35.728380815Z" level=info msg="StopPodSandbox for \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" returns successfully"
Nov 1 00:30:35.730588 containerd[1462]: time="2025-11-01T00:30:35.730543944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jbcpc,Uid:2e557abe-a350-4983-a5a3-ea11db3910b6,Namespace:kube-system,Attempt:1,}"
Nov 1 00:30:35.741981 systemd[1]: run-netns-cni\x2d8e554887\x2d4ac0\x2d9891\x2da7a7\x2df9cd9d8ffa4a.mount: Deactivated successfully.
Nov 1 00:30:36.049617 systemd-networkd[1357]: cali1aef8258f5a: Link UP
Nov 1 00:30:36.058184 systemd-networkd[1357]: cali1aef8258f5a: Gained carrier
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.840 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.877 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0 calico-apiserver-7bb458b5b7- calico-apiserver ce0ad95a-ccba-4cd4-91a4-5a94be968da8 986 0 2025-11-01 00:30:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb458b5b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 calico-apiserver-7bb458b5b7-gxqtr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1aef8258f5a [] [] }} ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.877 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.959 [INFO][4583] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" HandleID="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.960 [INFO][4583] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" HandleID="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024faf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"calico-apiserver-7bb458b5b7-gxqtr", "timestamp":"2025-11-01 00:30:35.959411985 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.960 [INFO][4583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.960 [INFO][4583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.961 [INFO][4583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84'
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.983 [INFO][4583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:35.992 [INFO][4583] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.005 [INFO][4583] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.010 [INFO][4583] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.014 [INFO][4583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.014 [INFO][4583] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.018 [INFO][4583] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.025 [INFO][4583] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 handle="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.036 [INFO][4583] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.6/26] block=192.168.43.0/26 handle="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.036 [INFO][4583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.6/26] handle="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84"
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.036 [INFO][4583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:30:36.093945 containerd[1462]: 2025-11-01 00:30:36.036 [INFO][4583] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.6/26] IPv6=[] ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" HandleID="k8s-pod-network.ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.095172 containerd[1462]: 2025-11-01 00:30:36.041 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce0ad95a-ccba-4cd4-91a4-5a94be968da8", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"calico-apiserver-7bb458b5b7-gxqtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aef8258f5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 00:30:36.095172 containerd[1462]: 2025-11-01 00:30:36.042 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.6/32] ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.095172 containerd[1462]: 2025-11-01 00:30:36.042 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aef8258f5a ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.095172 containerd[1462]: 2025-11-01 00:30:36.045 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.095172 containerd[1462]: 2025-11-01 00:30:36.046 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce0ad95a-ccba-4cd4-91a4-5a94be968da8", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2", Pod:"calico-apiserver-7bb458b5b7-gxqtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aef8258f5a", MAC:"d6:83:cf:64:9a:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 00:30:36.095172 containerd[1462]: 2025-11-01 00:30:36.083 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2" Namespace="calico-apiserver" Pod="calico-apiserver-7bb458b5b7-gxqtr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0"
Nov 1 00:30:36.162192 containerd[1462]: time="2025-11-01T00:30:36.161819384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:30:36.162192 containerd[1462]: time="2025-11-01T00:30:36.161909272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:30:36.162192 containerd[1462]: time="2025-11-01T00:30:36.161928178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:30:36.162192 containerd[1462]: time="2025-11-01T00:30:36.162069563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:30:36.187505 systemd-networkd[1357]: calid4419f8cdd5: Link UP
Nov 1 00:30:36.187945 systemd-networkd[1357]: calid4419f8cdd5: Gained carrier
Nov 1 00:30:36.217657 systemd[1]: Started cri-containerd-ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2.scope - libcontainer container ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2.
Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:35.798 [INFO][4538] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:35.835 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0 coredns-668d6bf9bc- kube-system 37d7183e-e34d-4dec-b261-c74c0840b2de 984 0 2025-11-01 00:29:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 coredns-668d6bf9bc-pd8zf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4419f8cdd5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-"
Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:35.835 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0"
Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:35.994 [INFO][4576] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" HandleID="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:35.996 [INFO][4576] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" HandleID="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"coredns-668d6bf9bc-pd8zf", "timestamp":"2025-11-01 00:30:35.994622446 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:35.996 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.037 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.037 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.088 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.111 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.126 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.128 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.133 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.133 [INFO][4576] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.139 [INFO][4576] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42 Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.151 [INFO][4576] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 
handle="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.171 [INFO][4576] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.7/26] block=192.168.43.0/26 handle="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.171 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.7/26] handle="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.171 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:36.237664 containerd[1462]: 2025-11-01 00:30:36.171 [INFO][4576] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.7/26] IPv6=[] ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" HandleID="k8s-pod-network.43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:36.239706 containerd[1462]: 2025-11-01 00:30:36.176 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0", GenerateName:"coredns-668d6bf9bc-", 
Namespace:"kube-system", SelfLink:"", UID:"37d7183e-e34d-4dec-b261-c74c0840b2de", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"coredns-668d6bf9bc-pd8zf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4419f8cdd5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:36.239706 containerd[1462]: 2025-11-01 00:30:36.176 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.7/32] ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:36.239706 
containerd[1462]: 2025-11-01 00:30:36.176 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4419f8cdd5 ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:36.239706 containerd[1462]: 2025-11-01 00:30:36.198 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:36.239706 containerd[1462]: 2025-11-01 00:30:36.205 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37d7183e-e34d-4dec-b261-c74c0840b2de", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42", Pod:"coredns-668d6bf9bc-pd8zf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4419f8cdd5", MAC:"2e:6b:2b:c8:e4:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:36.239706 containerd[1462]: 2025-11-01 00:30:36.234 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42" Namespace="kube-system" Pod="coredns-668d6bf9bc-pd8zf" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:36.281879 systemd-networkd[1357]: calif1ab08e5d28: Link UP Nov 1 00:30:36.283978 systemd-networkd[1357]: calif1ab08e5d28: Gained carrier Nov 1 00:30:36.296335 kubelet[2549]: I1101 00:30:36.296297 2549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:30:36.328579 containerd[1462]: time="2025-11-01T00:30:36.326629293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:36.328579 containerd[1462]: time="2025-11-01T00:30:36.326743972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:36.333039 containerd[1462]: time="2025-11-01T00:30:36.326823076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:36.333039 containerd[1462]: time="2025-11-01T00:30:36.331919175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:35.901 [INFO][4564] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:35.935 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0 coredns-668d6bf9bc- kube-system 2e557abe-a350-4983-a5a3-ea11db3910b6 985 0 2025-11-01 00:29:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84 coredns-668d6bf9bc-jbcpc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1ab08e5d28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:35.935 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.014 [INFO][4592] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" HandleID="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.017 [INFO][4592] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" HandleID="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e480), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", "pod":"coredns-668d6bf9bc-jbcpc", "timestamp":"2025-11-01 00:30:36.014342902 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.017 [INFO][4592] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.171 [INFO][4592] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.171 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84' Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.195 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.219 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.226 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.233 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.241 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.0/26 host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.241 [INFO][4592] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.0/26 handle="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.244 [INFO][4592] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163 Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.252 [INFO][4592] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.0/26 
handle="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.269 [INFO][4592] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.8/26] block=192.168.43.0/26 handle="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.270 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.8/26] handle="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" host="ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84" Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.270 [INFO][4592] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:36.334615 containerd[1462]: 2025-11-01 00:30:36.270 [INFO][4592] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.8/26] IPv6=[] ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" HandleID="k8s-pod-network.4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.335951 containerd[1462]: 2025-11-01 00:30:36.277 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0", GenerateName:"coredns-668d6bf9bc-", 
Namespace:"kube-system", SelfLink:"", UID:"2e557abe-a350-4983-a5a3-ea11db3910b6", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"", Pod:"coredns-668d6bf9bc-jbcpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1ab08e5d28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:36.335951 containerd[1462]: 2025-11-01 00:30:36.278 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.8/32] ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.335951 
containerd[1462]: 2025-11-01 00:30:36.278 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1ab08e5d28 ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.335951 containerd[1462]: 2025-11-01 00:30:36.283 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.335951 containerd[1462]: 2025-11-01 00:30:36.284 [INFO][4564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e557abe-a350-4983-a5a3-ea11db3910b6", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163", Pod:"coredns-668d6bf9bc-jbcpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1ab08e5d28", MAC:"46:78:ca:dc:4b:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:36.335951 containerd[1462]: 2025-11-01 00:30:36.315 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163" Namespace="kube-system" Pod="coredns-668d6bf9bc-jbcpc" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:36.388427 systemd[1]: Started cri-containerd-43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42.scope - libcontainer container 43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42. Nov 1 00:30:36.415406 containerd[1462]: time="2025-11-01T00:30:36.414801283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:30:36.415406 containerd[1462]: time="2025-11-01T00:30:36.414883564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:30:36.415406 containerd[1462]: time="2025-11-01T00:30:36.414910263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:36.417086 containerd[1462]: time="2025-11-01T00:30:36.415067240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:30:36.470533 systemd[1]: Started cri-containerd-4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163.scope - libcontainer container 4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163. Nov 1 00:30:36.473234 containerd[1462]: time="2025-11-01T00:30:36.472787256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb458b5b7-gxqtr,Uid:ce0ad95a-ccba-4cd4-91a4-5a94be968da8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2\"" Nov 1 00:30:36.478462 containerd[1462]: time="2025-11-01T00:30:36.478404719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:30:36.517629 containerd[1462]: time="2025-11-01T00:30:36.517460876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd8zf,Uid:37d7183e-e34d-4dec-b261-c74c0840b2de,Namespace:kube-system,Attempt:1,} returns sandbox id \"43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42\"" Nov 1 00:30:36.526700 containerd[1462]: time="2025-11-01T00:30:36.526608299Z" level=info msg="CreateContainer within sandbox \"43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:30:36.548045 
containerd[1462]: time="2025-11-01T00:30:36.547137655Z" level=info msg="CreateContainer within sandbox \"43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17e0f26b0a5435d529ac901a3671b017acc3a309fccfb42428dab470da270791\"" Nov 1 00:30:36.551072 containerd[1462]: time="2025-11-01T00:30:36.548931828Z" level=info msg="StartContainer for \"17e0f26b0a5435d529ac901a3671b017acc3a309fccfb42428dab470da270791\"" Nov 1 00:30:36.614197 containerd[1462]: time="2025-11-01T00:30:36.614000029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jbcpc,Uid:2e557abe-a350-4983-a5a3-ea11db3910b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163\"" Nov 1 00:30:36.619313 systemd[1]: Started cri-containerd-17e0f26b0a5435d529ac901a3671b017acc3a309fccfb42428dab470da270791.scope - libcontainer container 17e0f26b0a5435d529ac901a3671b017acc3a309fccfb42428dab470da270791. Nov 1 00:30:36.623045 containerd[1462]: time="2025-11-01T00:30:36.622558030Z" level=info msg="CreateContainer within sandbox \"4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:30:36.679529 containerd[1462]: time="2025-11-01T00:30:36.677484736Z" level=info msg="CreateContainer within sandbox \"4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2d8437c15d56ac371caa9f8238774ddd1abe08d658871991ac83d7554900a15\"" Nov 1 00:30:36.681906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427378577.mount: Deactivated successfully. 
Nov 1 00:30:36.685525 containerd[1462]: time="2025-11-01T00:30:36.682732481Z" level=info msg="StartContainer for \"d2d8437c15d56ac371caa9f8238774ddd1abe08d658871991ac83d7554900a15\"" Nov 1 00:30:36.686295 containerd[1462]: time="2025-11-01T00:30:36.686234886Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:36.690194 containerd[1462]: time="2025-11-01T00:30:36.690124151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:30:36.690492 kubelet[2549]: E1101 00:30:36.690338 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:30:36.690492 kubelet[2549]: E1101 00:30:36.690394 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:30:36.692414 kubelet[2549]: E1101 00:30:36.690546 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhd88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-gxqtr_calico-apiserver(ce0ad95a-ccba-4cd4-91a4-5a94be968da8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:36.692663 containerd[1462]: time="2025-11-01T00:30:36.690430120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:30:36.692757 kubelet[2549]: E1101 00:30:36.692477 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:30:36.740452 containerd[1462]: time="2025-11-01T00:30:36.740400963Z" level=info msg="StartContainer for \"17e0f26b0a5435d529ac901a3671b017acc3a309fccfb42428dab470da270791\" returns successfully" Nov 1 00:30:36.767234 systemd[1]: Started cri-containerd-d2d8437c15d56ac371caa9f8238774ddd1abe08d658871991ac83d7554900a15.scope - libcontainer container d2d8437c15d56ac371caa9f8238774ddd1abe08d658871991ac83d7554900a15. 
Nov 1 00:30:36.824361 containerd[1462]: time="2025-11-01T00:30:36.824305570Z" level=info msg="StartContainer for \"d2d8437c15d56ac371caa9f8238774ddd1abe08d658871991ac83d7554900a15\" returns successfully" Nov 1 00:30:37.318080 kernel: bpftool[4868]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:30:37.351254 systemd-networkd[1357]: cali1aef8258f5a: Gained IPv6LL Nov 1 00:30:37.542825 systemd-networkd[1357]: calid4419f8cdd5: Gained IPv6LL Nov 1 00:30:37.688138 kubelet[2549]: E1101 00:30:37.687854 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:30:37.705050 kubelet[2549]: I1101 00:30:37.704134 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jbcpc" podStartSLOduration=48.704110618 podStartE2EDuration="48.704110618s" podCreationTimestamp="2025-11-01 00:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:30:37.703825894 +0000 UTC m=+55.570242214" watchObservedRunningTime="2025-11-01 00:30:37.704110618 +0000 UTC m=+55.570526938" Nov 1 00:30:37.820072 systemd-networkd[1357]: vxlan.calico: Link UP Nov 1 00:30:37.820085 systemd-networkd[1357]: vxlan.calico: Gained carrier Nov 1 00:30:38.311237 systemd-networkd[1357]: calif1ab08e5d28: Gained IPv6LL Nov 1 00:30:39.462454 systemd-networkd[1357]: vxlan.calico: Gained IPv6LL Nov 1 00:30:41.305502 containerd[1462]: 
time="2025-11-01T00:30:41.304597386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:30:41.321903 kubelet[2549]: I1101 00:30:41.320547 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pd8zf" podStartSLOduration=52.320514347 podStartE2EDuration="52.320514347s" podCreationTimestamp="2025-11-01 00:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:30:37.779655461 +0000 UTC m=+55.646071781" watchObservedRunningTime="2025-11-01 00:30:41.320514347 +0000 UTC m=+59.186930665" Nov 1 00:30:41.535728 containerd[1462]: time="2025-11-01T00:30:41.535636662Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:41.537416 containerd[1462]: time="2025-11-01T00:30:41.537350617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:30:41.537543 containerd[1462]: time="2025-11-01T00:30:41.537470013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:30:41.537883 kubelet[2549]: E1101 00:30:41.537818 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:30:41.537994 kubelet[2549]: E1101 00:30:41.537886 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:30:41.538128 kubelet[2549]: E1101 00:30:41.538074 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9354361088054afe9becf34fc1077d69,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:41.541510 containerd[1462]: time="2025-11-01T00:30:41.541386928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:30:41.747463 containerd[1462]: time="2025-11-01T00:30:41.747262947Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:41.749240 containerd[1462]: time="2025-11-01T00:30:41.748981882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:30:41.750051 containerd[1462]: time="2025-11-01T00:30:41.749157416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:30:41.750190 kubelet[2549]: E1101 00:30:41.749552 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:30:41.750190 kubelet[2549]: E1101 00:30:41.749621 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 
00:30:41.750190 kubelet[2549]: E1101 00:30:41.749788 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:41.750997 kubelet[2549]: E1101 00:30:41.750935 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61"
Nov 1 00:30:41.869574 ntpd[1427]: Listen normally on 7 vxlan.calico 192.168.43.0:123
Nov 1 00:30:41.869712 ntpd[1427]: Listen normally on 8 cali6c2f19fc9ff [fe80::ecee:eeff:feee:eeee%4]:123
Nov 1 00:30:41.869798 ntpd[1427]: Listen normally on 9 cali72d0940ade0 [fe80::ecee:eeff:feee:eeee%5]:123
Nov 1 00:30:41.869862 ntpd[1427]: Listen normally on 10 cali86ec928963f [fe80::ecee:eeff:feee:eeee%6]:123
Nov 1 00:30:41.869922 ntpd[1427]: Listen normally on 11 calica63580f8c0 [fe80::ecee:eeff:feee:eeee%7]:123
Nov 1 00:30:41.869997 ntpd[1427]: Listen normally on 12 califa2f99d1cfe [fe80::ecee:eeff:feee:eeee%8]:123
Nov 1 00:30:41.870161 ntpd[1427]: Listen normally on 13 cali1aef8258f5a [fe80::ecee:eeff:feee:eeee%9]:123
Nov 1 00:30:41.870221 ntpd[1427]: Listen normally on 14 calid4419f8cdd5 [fe80::ecee:eeff:feee:eeee%10]:123
Nov 1 00:30:41.870295 ntpd[1427]: Listen normally on 15 calif1ab08e5d28 [fe80::ecee:eeff:feee:eeee%11]:123
Nov 1 00:30:41.870357 ntpd[1427]: Listen normally on 16 vxlan.calico [fe80::646f:75ff:fedf:f827%12]:123
Nov 1 00:30:42.260540 containerd[1462]: time="2025-11-01T00:30:42.260474385Z" level=info msg="StopPodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\"" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.317 [WARNING][4986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f01ebb62-cbae-4771-a12a-33c798f125cd", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1", Pod:"calico-apiserver-7bb458b5b7-htf5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa2f99d1cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.318 [INFO][4986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.318 
[INFO][4986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" iface="eth0" netns="" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.318 [INFO][4986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.318 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.351 [INFO][4995] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.351 [INFO][4995] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.352 [INFO][4995] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.362 [WARNING][4995] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.362 [INFO][4995] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.365 [INFO][4995] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:42.368488 containerd[1462]: 2025-11-01 00:30:42.367 [INFO][4986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.370801 containerd[1462]: time="2025-11-01T00:30:42.368611873Z" level=info msg="TearDown network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" successfully" Nov 1 00:30:42.370801 containerd[1462]: time="2025-11-01T00:30:42.368650556Z" level=info msg="StopPodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" returns successfully" Nov 1 00:30:42.370801 containerd[1462]: time="2025-11-01T00:30:42.369453129Z" level=info msg="RemovePodSandbox for \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\"" Nov 1 00:30:42.370801 containerd[1462]: time="2025-11-01T00:30:42.369494962Z" level=info msg="Forcibly stopping sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\"" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.449 [WARNING][5010] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f01ebb62-cbae-4771-a12a-33c798f125cd", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"b44ef8b712667feb432e0e30d19e3777c4f3f18f195e7b4dd79a4c3ae616cfe1", Pod:"calico-apiserver-7bb458b5b7-htf5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa2f99d1cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.449 [INFO][5010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 
00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.450 [INFO][5010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" iface="eth0" netns="" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.450 [INFO][5010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.450 [INFO][5010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.481 [INFO][5018] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.481 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.481 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.490 [WARNING][5018] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.490 [INFO][5018] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" HandleID="k8s-pod-network.9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--htf5s-eth0" Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.493 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:42.497899 containerd[1462]: 2025-11-01 00:30:42.495 [INFO][5010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad" Nov 1 00:30:42.500478 containerd[1462]: time="2025-11-01T00:30:42.498897757Z" level=info msg="TearDown network for sandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" successfully" Nov 1 00:30:42.506172 containerd[1462]: time="2025-11-01T00:30:42.506114579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:42.506293 containerd[1462]: time="2025-11-01T00:30:42.506197445Z" level=info msg="RemovePodSandbox \"9d3d3dbeb4c7192505fb3b90d5fa21d02056a91b367b7da459c31421d61ac3ad\" returns successfully" Nov 1 00:30:42.507004 containerd[1462]: time="2025-11-01T00:30:42.506963917Z" level=info msg="StopPodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\"" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.551 [WARNING][5032] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce0ad95a-ccba-4cd4-91a4-5a94be968da8", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2", Pod:"calico-apiserver-7bb458b5b7-gxqtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aef8258f5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.552 [INFO][5032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.552 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" iface="eth0" netns="" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.552 [INFO][5032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.553 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.583 [INFO][5039] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.583 [INFO][5039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.583 [INFO][5039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.595 [WARNING][5039] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.596 [INFO][5039] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.598 [INFO][5039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:42.601777 containerd[1462]: 2025-11-01 00:30:42.599 [INFO][5032] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.602744 containerd[1462]: time="2025-11-01T00:30:42.602677041Z" level=info msg="TearDown network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" successfully" Nov 1 00:30:42.602744 containerd[1462]: time="2025-11-01T00:30:42.602723731Z" level=info msg="StopPodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" returns successfully" Nov 1 00:30:42.603540 containerd[1462]: time="2025-11-01T00:30:42.603504239Z" level=info msg="RemovePodSandbox for \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\"" Nov 1 00:30:42.603669 containerd[1462]: time="2025-11-01T00:30:42.603549866Z" level=info msg="Forcibly stopping sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\"" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.656 [WARNING][5053] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0", GenerateName:"calico-apiserver-7bb458b5b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce0ad95a-ccba-4cd4-91a4-5a94be968da8", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb458b5b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"ab701c18486912c2ce09dc3849360b636769cd43d271b890e16d32c8326bf9d2", Pod:"calico-apiserver-7bb458b5b7-gxqtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aef8258f5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.656 [INFO][5053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.656 
[INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" iface="eth0" netns="" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.656 [INFO][5053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.656 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.704 [INFO][5060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.705 [INFO][5060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.705 [INFO][5060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.715 [WARNING][5060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.716 [INFO][5060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" HandleID="k8s-pod-network.dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--apiserver--7bb458b5b7--gxqtr-eth0" Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.717 [INFO][5060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:42.720892 containerd[1462]: 2025-11-01 00:30:42.719 [INFO][5053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0" Nov 1 00:30:42.721775 containerd[1462]: time="2025-11-01T00:30:42.720931724Z" level=info msg="TearDown network for sandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" successfully" Nov 1 00:30:42.725777 containerd[1462]: time="2025-11-01T00:30:42.725725522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:42.726085 containerd[1462]: time="2025-11-01T00:30:42.725802733Z" level=info msg="RemovePodSandbox \"dcbc007b0b9f9163c0f0dcbef8e8e6bddfd1c60863dbdc065b45ae9455af4df0\" returns successfully" Nov 1 00:30:42.726370 containerd[1462]: time="2025-11-01T00:30:42.726337821Z" level=info msg="StopPodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\"" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.769 [WARNING][5076] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.769 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.769 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" iface="eth0" netns="" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.769 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.769 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.800 [INFO][5083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.800 [INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.800 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.809 [WARNING][5083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.810 [INFO][5083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.812 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:42.815434 containerd[1462]: 2025-11-01 00:30:42.813 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.815434 containerd[1462]: time="2025-11-01T00:30:42.815283597Z" level=info msg="TearDown network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" successfully" Nov 1 00:30:42.815434 containerd[1462]: time="2025-11-01T00:30:42.815309647Z" level=info msg="StopPodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" returns successfully" Nov 1 00:30:42.816540 containerd[1462]: time="2025-11-01T00:30:42.816047778Z" level=info msg="RemovePodSandbox for \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\"" Nov 1 00:30:42.816540 containerd[1462]: time="2025-11-01T00:30:42.816090080Z" level=info msg="Forcibly stopping sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\"" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.863 [WARNING][5097] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward 
with the clean up ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.863 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.863 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" iface="eth0" netns="" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.863 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.863 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.892 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.892 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.892 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.901 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.901 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" HandleID="k8s-pod-network.5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-whisker--967b467f4--qpfxh-eth0" Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.903 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:42.908131 containerd[1462]: 2025-11-01 00:30:42.904 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b" Nov 1 00:30:42.908131 containerd[1462]: time="2025-11-01T00:30:42.906105881Z" level=info msg="TearDown network for sandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" successfully" Nov 1 00:30:42.914075 containerd[1462]: time="2025-11-01T00:30:42.914001708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:42.914206 containerd[1462]: time="2025-11-01T00:30:42.914105512Z" level=info msg="RemovePodSandbox \"5b29fd128c8898f505a63d6f049480f6a893453fcfc37759b495cde372b1979b\" returns successfully" Nov 1 00:30:42.914795 containerd[1462]: time="2025-11-01T00:30:42.914758927Z" level=info msg="StopPodSandbox for \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\"" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.959 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e557abe-a350-4983-a5a3-ea11db3910b6", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163", Pod:"coredns-668d6bf9bc-jbcpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calif1ab08e5d28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.960 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.960 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" iface="eth0" netns="" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.960 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.960 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.988 [INFO][5126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.988 [INFO][5126] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.988 [INFO][5126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.998 [WARNING][5126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:42.998 [INFO][5126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:43.000 [INFO][5126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.003809 containerd[1462]: 2025-11-01 00:30:43.002 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.003809 containerd[1462]: time="2025-11-01T00:30:43.003623894Z" level=info msg="TearDown network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" successfully" Nov 1 00:30:43.003809 containerd[1462]: time="2025-11-01T00:30:43.003663067Z" level=info msg="StopPodSandbox for \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" returns successfully" Nov 1 00:30:43.004920 containerd[1462]: time="2025-11-01T00:30:43.004857071Z" level=info msg="RemovePodSandbox for \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\"" Nov 1 00:30:43.004920 containerd[1462]: time="2025-11-01T00:30:43.004904711Z" level=info msg="Forcibly stopping sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\"" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.049 [WARNING][5140] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e557abe-a350-4983-a5a3-ea11db3910b6", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"4ca74324461619cbd31bb33086d8be9ebdb8000dd192b4024fc11afab4c7e163", Pod:"coredns-668d6bf9bc-jbcpc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1ab08e5d28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.050 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.050 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" iface="eth0" netns="" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.050 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.050 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.076 [INFO][5147] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.076 [INFO][5147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.077 [INFO][5147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.085 [WARNING][5147] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.086 [INFO][5147] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" HandleID="k8s-pod-network.c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--jbcpc-eth0" Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.088 [INFO][5147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.091274 containerd[1462]: 2025-11-01 00:30:43.089 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050" Nov 1 00:30:43.092119 containerd[1462]: time="2025-11-01T00:30:43.091318723Z" level=info msg="TearDown network for sandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" successfully" Nov 1 00:30:43.096828 containerd[1462]: time="2025-11-01T00:30:43.096619077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:43.096828 containerd[1462]: time="2025-11-01T00:30:43.096700091Z" level=info msg="RemovePodSandbox \"c2031a5411d1ed2467481e6c85c1209be6be763c1f935117102964b40ba81050\" returns successfully" Nov 1 00:30:43.097966 containerd[1462]: time="2025-11-01T00:30:43.097497850Z" level=info msg="StopPodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\"" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.149 [WARNING][5161] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0", GenerateName:"calico-kube-controllers-749d6dfb67-", Namespace:"calico-system", SelfLink:"", UID:"eb24b203-bba2-4a68-ac20-bbf747c87903", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749d6dfb67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c", Pod:"calico-kube-controllers-749d6dfb67-b9g5c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72d0940ade0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.150 [INFO][5161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.150 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" iface="eth0" netns="" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.150 [INFO][5161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.150 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.178 [INFO][5169] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.178 [INFO][5169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.178 [INFO][5169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.187 [WARNING][5169] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.187 [INFO][5169] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.189 [INFO][5169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.193057 containerd[1462]: 2025-11-01 00:30:43.191 [INFO][5161] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.193057 containerd[1462]: time="2025-11-01T00:30:43.193032749Z" level=info msg="TearDown network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" successfully" Nov 1 00:30:43.195729 containerd[1462]: time="2025-11-01T00:30:43.193072131Z" level=info msg="StopPodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" returns successfully" Nov 1 00:30:43.195729 containerd[1462]: time="2025-11-01T00:30:43.194741642Z" level=info msg="RemovePodSandbox for \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\"" Nov 1 00:30:43.195729 containerd[1462]: time="2025-11-01T00:30:43.195004134Z" level=info msg="Forcibly stopping sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\"" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.241 [WARNING][5184] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0", GenerateName:"calico-kube-controllers-749d6dfb67-", Namespace:"calico-system", SelfLink:"", UID:"eb24b203-bba2-4a68-ac20-bbf747c87903", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"749d6dfb67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"c40f6a6ef0853d1ef3127476bba590b0c361a99b0c64acdcb223e1ad81daf03c", Pod:"calico-kube-controllers-749d6dfb67-b9g5c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72d0940ade0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.241 [INFO][5184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.290686 containerd[1462]: 
2025-11-01 00:30:43.241 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" iface="eth0" netns="" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.241 [INFO][5184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.241 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.275 [INFO][5191] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.276 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.276 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.285 [WARNING][5191] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.285 [INFO][5191] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" HandleID="k8s-pod-network.78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-calico--kube--controllers--749d6dfb67--b9g5c-eth0" Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.287 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.290686 containerd[1462]: 2025-11-01 00:30:43.289 [INFO][5184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd" Nov 1 00:30:43.291954 containerd[1462]: time="2025-11-01T00:30:43.290733024Z" level=info msg="TearDown network for sandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" successfully" Nov 1 00:30:43.296556 containerd[1462]: time="2025-11-01T00:30:43.296482216Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:43.297103 containerd[1462]: time="2025-11-01T00:30:43.296564945Z" level=info msg="RemovePodSandbox \"78f5dfeec4450b02b3f4b4a55481ccee402b1dd3ebab74cdf2027f41c02bb8dd\" returns successfully" Nov 1 00:30:43.297838 containerd[1462]: time="2025-11-01T00:30:43.297426313Z" level=info msg="StopPodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\"" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.345 [WARNING][5205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"73f63e7d-cd05-453e-9fac-681616f1563c", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff", Pod:"goldmane-666569f655-cjssg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica63580f8c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.347 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.347 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" iface="eth0" netns="" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.347 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.347 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.379 [INFO][5212] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.379 [INFO][5212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.379 [INFO][5212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.388 [WARNING][5212] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.388 [INFO][5212] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.390 [INFO][5212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.393639 containerd[1462]: 2025-11-01 00:30:43.392 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.394590 containerd[1462]: time="2025-11-01T00:30:43.393682982Z" level=info msg="TearDown network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" successfully" Nov 1 00:30:43.394590 containerd[1462]: time="2025-11-01T00:30:43.393713756Z" level=info msg="StopPodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" returns successfully" Nov 1 00:30:43.395318 containerd[1462]: time="2025-11-01T00:30:43.395265603Z" level=info msg="RemovePodSandbox for \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\"" Nov 1 00:30:43.395318 containerd[1462]: time="2025-11-01T00:30:43.395310799Z" level=info msg="Forcibly stopping sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\"" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.448 [WARNING][5226] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"73f63e7d-cd05-453e-9fac-681616f1563c", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"73675f2e700f077f9383df5a01c8c9396ce4fd0dfb6b3b5ce18339736afb2dff", Pod:"goldmane-666569f655-cjssg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica63580f8c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.448 [INFO][5226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.448 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" iface="eth0" netns="" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.448 [INFO][5226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.448 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.487 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.488 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.488 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.496 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.496 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" HandleID="k8s-pod-network.d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-goldmane--666569f655--cjssg-eth0" Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.498 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.502794 containerd[1462]: 2025-11-01 00:30:43.500 [INFO][5226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65" Nov 1 00:30:43.502794 containerd[1462]: time="2025-11-01T00:30:43.502759161Z" level=info msg="TearDown network for sandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" successfully" Nov 1 00:30:43.510801 containerd[1462]: time="2025-11-01T00:30:43.510234791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:43.510801 containerd[1462]: time="2025-11-01T00:30:43.510358410Z" level=info msg="RemovePodSandbox \"d1e6bd4f259bab4af9db3af794facbc4ebc53ba08d6affbaeb5dca3b51b7ad65\" returns successfully" Nov 1 00:30:43.512463 containerd[1462]: time="2025-11-01T00:30:43.512344895Z" level=info msg="StopPodSandbox for \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\"" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.556 [WARNING][5256] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2c53676-0b50-4c2c-9234-572240cab45e", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb", Pod:"csi-node-driver-9v2bt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86ec928963f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.557 [INFO][5256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.557 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" iface="eth0" netns="" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.557 [INFO][5256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.557 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.584 [INFO][5264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.584 [INFO][5264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.584 [INFO][5264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.592 [WARNING][5264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.592 [INFO][5264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.594 [INFO][5264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.597458 containerd[1462]: 2025-11-01 00:30:43.595 [INFO][5256] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.598567 containerd[1462]: time="2025-11-01T00:30:43.597530962Z" level=info msg="TearDown network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" successfully" Nov 1 00:30:43.598567 containerd[1462]: time="2025-11-01T00:30:43.597564259Z" level=info msg="StopPodSandbox for \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" returns successfully" Nov 1 00:30:43.599230 containerd[1462]: time="2025-11-01T00:30:43.599129180Z" level=info msg="RemovePodSandbox for \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\"" Nov 1 00:30:43.599230 containerd[1462]: time="2025-11-01T00:30:43.599206075Z" level=info msg="Forcibly stopping sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\"" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.644 [WARNING][5278] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2c53676-0b50-4c2c-9234-572240cab45e", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 30, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"776bf6d19ea460a1bfd1ad0cfb3a8a17c948e12c73e9e8d9c638537859d490cb", Pod:"csi-node-driver-9v2bt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali86ec928963f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.644 [INFO][5278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.644 
[INFO][5278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" iface="eth0" netns="" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.644 [INFO][5278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.644 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.672 [INFO][5286] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.672 [INFO][5286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.673 [INFO][5286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.680 [WARNING][5286] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.680 [INFO][5286] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" HandleID="k8s-pod-network.f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-csi--node--driver--9v2bt-eth0" Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.682 [INFO][5286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.686104 containerd[1462]: 2025-11-01 00:30:43.683 [INFO][5278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d" Nov 1 00:30:43.686104 containerd[1462]: time="2025-11-01T00:30:43.685504626Z" level=info msg="TearDown network for sandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" successfully" Nov 1 00:30:43.691191 containerd[1462]: time="2025-11-01T00:30:43.691133499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:43.691303 containerd[1462]: time="2025-11-01T00:30:43.691221813Z" level=info msg="RemovePodSandbox \"f56464f65d621165abacce7609ff35b44c8193e3d824a36347a3c4413471af4d\" returns successfully" Nov 1 00:30:43.692182 containerd[1462]: time="2025-11-01T00:30:43.692144239Z" level=info msg="StopPodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\"" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.741 [WARNING][5300] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37d7183e-e34d-4dec-b261-c74c0840b2de", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42", Pod:"coredns-668d6bf9bc-pd8zf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calid4419f8cdd5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.741 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.741 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" iface="eth0" netns="" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.741 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.741 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.781 [INFO][5307] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.781 [INFO][5307] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.782 [INFO][5307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.789 [WARNING][5307] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.789 [INFO][5307] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.790 [INFO][5307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.793565 containerd[1462]: 2025-11-01 00:30:43.792 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.793565 containerd[1462]: time="2025-11-01T00:30:43.793489746Z" level=info msg="TearDown network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" successfully" Nov 1 00:30:43.793565 containerd[1462]: time="2025-11-01T00:30:43.793523383Z" level=info msg="StopPodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" returns successfully" Nov 1 00:30:43.797111 containerd[1462]: time="2025-11-01T00:30:43.796779381Z" level=info msg="RemovePodSandbox for \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\"" Nov 1 00:30:43.797111 containerd[1462]: time="2025-11-01T00:30:43.796837168Z" level=info msg="Forcibly stopping sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\"" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.848 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37d7183e-e34d-4dec-b261-c74c0840b2de", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-2d2c98a35867e77a5e84", ContainerID:"43b81bdf31cb04076fec62730efb7acebca61b6d8e5a1066c6227e127c535a42", Pod:"coredns-668d6bf9bc-pd8zf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4419f8cdd5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.848 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.848 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" iface="eth0" netns="" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.848 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.848 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.876 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.876 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.877 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.886 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.886 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" HandleID="k8s-pod-network.5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Workload="ci--4081--3--6--nightly--20251031--2100--2d2c98a35867e77a5e84-k8s-coredns--668d6bf9bc--pd8zf-eth0" Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.888 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:30:43.891043 containerd[1462]: 2025-11-01 00:30:43.889 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8" Nov 1 00:30:43.891846 containerd[1462]: time="2025-11-01T00:30:43.891139024Z" level=info msg="TearDown network for sandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" successfully" Nov 1 00:30:43.895895 containerd[1462]: time="2025-11-01T00:30:43.895848224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:30:43.896084 containerd[1462]: time="2025-11-01T00:30:43.895921983Z" level=info msg="RemovePodSandbox \"5759c1226c7238fc403f5987dcff08adfa1ad058d53243cd4743af6588d64eb8\" returns successfully" Nov 1 00:30:44.308273 containerd[1462]: time="2025-11-01T00:30:44.307950630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:30:44.520951 containerd[1462]: time="2025-11-01T00:30:44.520873604Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:44.522460 containerd[1462]: time="2025-11-01T00:30:44.522399171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:30:44.522688 containerd[1462]: time="2025-11-01T00:30:44.522412331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:30:44.522770 kubelet[2549]: E1101 00:30:44.522674 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:30:44.522770 kubelet[2549]: E1101 00:30:44.522743 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:30:44.523242 kubelet[2549]: E1101 00:30:44.522907 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:44.525901 containerd[1462]: time="2025-11-01T00:30:44.525846464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:30:44.731218 containerd[1462]: time="2025-11-01T00:30:44.730973574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:44.732416 containerd[1462]: time="2025-11-01T00:30:44.732345865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:30:44.732542 containerd[1462]: time="2025-11-01T00:30:44.732441631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:30:44.732633 kubelet[2549]: E1101 00:30:44.732584 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:30:44.732739 kubelet[2549]: E1101 00:30:44.732646 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:30:44.732875 kubelet[2549]: E1101 00:30:44.732812 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:44.734722 kubelet[2549]: E1101 00:30:44.734654 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:45.304243 containerd[1462]: time="2025-11-01T00:30:45.304193203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:30:45.498865 containerd[1462]: time="2025-11-01T00:30:45.498792334Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:45.500329 containerd[1462]: time="2025-11-01T00:30:45.500175309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:30:45.500329 containerd[1462]: time="2025-11-01T00:30:45.500232395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:30:45.500557 kubelet[2549]: E1101 00:30:45.500454 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:30:45.500557 kubelet[2549]: E1101 00:30:45.500519 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:30:45.500827 kubelet[2549]: E1101 00:30:45.500703 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrrqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-749d6dfb67-b9g5c_calico-system(eb24b203-bba2-4a68-ac20-bbf747c87903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:45.502108 kubelet[2549]: E1101 00:30:45.502060 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:30:48.308057 containerd[1462]: time="2025-11-01T00:30:48.307968659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:30:48.504292 containerd[1462]: 
time="2025-11-01T00:30:48.504231553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:48.506188 containerd[1462]: time="2025-11-01T00:30:48.505866452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:30:48.506188 containerd[1462]: time="2025-11-01T00:30:48.506035782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:30:48.507598 kubelet[2549]: E1101 00:30:48.506602 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:30:48.507598 kubelet[2549]: E1101 00:30:48.506681 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:30:48.507598 kubelet[2549]: E1101 00:30:48.506865 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhd88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-gxqtr_calico-apiserver(ce0ad95a-ccba-4cd4-91a4-5a94be968da8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:48.508894 kubelet[2549]: E1101 00:30:48.508703 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:30:49.303605 containerd[1462]: time="2025-11-01T00:30:49.303280949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:30:49.500727 containerd[1462]: time="2025-11-01T00:30:49.500649894Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:49.502181 containerd[1462]: time="2025-11-01T00:30:49.502123410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:30:49.502372 containerd[1462]: time="2025-11-01T00:30:49.502140397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:30:49.502639 kubelet[2549]: E1101 00:30:49.502575 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:30:49.502737 kubelet[2549]: E1101 00:30:49.502645 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:30:49.503259 kubelet[2549]: E1101 00:30:49.503044 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9btkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-htf5s_calico-apiserver(f01ebb62-cbae-4771-a12a-33c798f125cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:49.504344 kubelet[2549]: E1101 00:30:49.504228 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:30:49.504467 containerd[1462]: time="2025-11-01T00:30:49.503288602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:30:49.698390 containerd[1462]: 
time="2025-11-01T00:30:49.698209402Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:30:49.699738 containerd[1462]: time="2025-11-01T00:30:49.699598698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:30:49.699738 containerd[1462]: time="2025-11-01T00:30:49.699657535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:30:49.700174 kubelet[2549]: E1101 00:30:49.700119 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:30:49.700675 kubelet[2549]: E1101 00:30:49.700189 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:30:49.700675 kubelet[2549]: E1101 00:30:49.700377 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p65gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cjssg_calico-system(73f63e7d-cd05-453e-9fac-681616f1563c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:30:49.701693 kubelet[2549]: E1101 00:30:49.701640 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:30:53.304244 kubelet[2549]: E1101 00:30:53.304129 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:30:58.306196 kubelet[2549]: E1101 00:30:58.306085 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:30:59.303436 kubelet[2549]: E1101 00:30:59.303376 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:31:00.267427 systemd[1]: Started sshd@7-10.128.0.44:22-147.75.109.163:37452.service - OpenSSH per-connection server daemon (147.75.109.163:37452). Nov 1 00:31:00.304929 kubelet[2549]: E1101 00:31:00.304862 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:31:00.562178 sshd[5356]: Accepted publickey for core from 147.75.109.163 port 37452 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:00.564222 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:00.570090 systemd-logind[1438]: New session 8 of user core. Nov 1 00:31:00.577243 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:31:00.879060 sshd[5356]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:00.884965 systemd[1]: sshd@7-10.128.0.44:22-147.75.109.163:37452.service: Deactivated successfully. 
Nov 1 00:31:00.888254 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:31:00.890934 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:31:00.892689 systemd-logind[1438]: Removed session 8. Nov 1 00:31:03.303319 kubelet[2549]: E1101 00:31:03.303248 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:31:04.303634 kubelet[2549]: E1101 00:31:04.303171 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:31:05.304377 containerd[1462]: time="2025-11-01T00:31:05.304318107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:31:05.505280 containerd[1462]: time="2025-11-01T00:31:05.505193704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:05.506810 containerd[1462]: time="2025-11-01T00:31:05.506746729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:31:05.506968 containerd[1462]: time="2025-11-01T00:31:05.506777803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:31:05.507207 kubelet[2549]: E1101 00:31:05.507142 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:05.507739 kubelet[2549]: E1101 00:31:05.507210 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:05.507739 kubelet[2549]: E1101 00:31:05.507371 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9354361088054afe9becf34fc1077d69,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:05.510565 containerd[1462]: time="2025-11-01T00:31:05.510048434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:31:05.733620 containerd[1462]: time="2025-11-01T00:31:05.733448428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:05.735122 containerd[1462]: time="2025-11-01T00:31:05.734970884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:31:05.735122 containerd[1462]: time="2025-11-01T00:31:05.735045827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:31:05.735695 kubelet[2549]: E1101 00:31:05.735270 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:05.735695 kubelet[2549]: E1101 00:31:05.735331 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:05.735695 kubelet[2549]: E1101 00:31:05.735552 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:05.737335 kubelet[2549]: E1101 00:31:05.737254 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:31:05.937365 systemd[1]: Started sshd@8-10.128.0.44:22-147.75.109.163:37458.service - OpenSSH per-connection server daemon (147.75.109.163:37458). Nov 1 00:31:06.230131 sshd[5391]: Accepted publickey for core from 147.75.109.163 port 37458 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:06.233711 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:06.243586 systemd-logind[1438]: New session 9 of user core. Nov 1 00:31:06.249498 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:31:06.524278 sshd[5391]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:06.530620 systemd[1]: sshd@8-10.128.0.44:22-147.75.109.163:37458.service: Deactivated successfully. Nov 1 00:31:06.534224 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:31:06.535753 systemd-logind[1438]: Session 9 logged out. 
Waiting for processes to exit. Nov 1 00:31:06.537621 systemd-logind[1438]: Removed session 9. Nov 1 00:31:11.587420 systemd[1]: Started sshd@9-10.128.0.44:22-147.75.109.163:51862.service - OpenSSH per-connection server daemon (147.75.109.163:51862). Nov 1 00:31:11.887207 sshd[5405]: Accepted publickey for core from 147.75.109.163 port 51862 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:11.889049 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:11.900792 systemd-logind[1438]: New session 10 of user core. Nov 1 00:31:11.909218 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:31:12.182458 sshd[5405]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:12.187677 systemd[1]: sshd@9-10.128.0.44:22-147.75.109.163:51862.service: Deactivated successfully. Nov 1 00:31:12.191650 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:31:12.193975 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:31:12.195755 systemd-logind[1438]: Removed session 10. Nov 1 00:31:12.240419 systemd[1]: Started sshd@10-10.128.0.44:22-147.75.109.163:51872.service - OpenSSH per-connection server daemon (147.75.109.163:51872). Nov 1 00:31:12.532339 sshd[5419]: Accepted publickey for core from 147.75.109.163 port 51872 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:12.534417 sshd[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:12.540037 systemd-logind[1438]: New session 11 of user core. Nov 1 00:31:12.548211 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:31:12.912361 sshd[5419]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:12.922687 systemd[1]: sshd@10-10.128.0.44:22-147.75.109.163:51872.service: Deactivated successfully. Nov 1 00:31:12.924159 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. 
Nov 1 00:31:12.926990 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:31:12.930961 systemd-logind[1438]: Removed session 11. Nov 1 00:31:12.970485 systemd[1]: Started sshd@11-10.128.0.44:22-147.75.109.163:51876.service - OpenSSH per-connection server daemon (147.75.109.163:51876). Nov 1 00:31:13.276190 sshd[5430]: Accepted publickey for core from 147.75.109.163 port 51876 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:13.278178 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:13.285588 systemd-logind[1438]: New session 12 of user core. Nov 1 00:31:13.291265 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:31:13.305460 containerd[1462]: time="2025-11-01T00:31:13.304917739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:31:13.530625 containerd[1462]: time="2025-11-01T00:31:13.530448918Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:13.532811 containerd[1462]: time="2025-11-01T00:31:13.532182679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:31:13.532811 containerd[1462]: time="2025-11-01T00:31:13.532231403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:31:13.533071 kubelet[2549]: E1101 00:31:13.532524 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:31:13.533071 kubelet[2549]: E1101 00:31:13.532601 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:31:13.533071 kubelet[2549]: E1101 00:31:13.532870 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrrqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&
ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-749d6dfb67-b9g5c_calico-system(eb24b203-bba2-4a68-ac20-bbf747c87903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:13.535140 containerd[1462]: time="2025-11-01T00:31:13.534157783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:31:13.535308 kubelet[2549]: E1101 00:31:13.534303 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:31:13.566189 sshd[5430]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:13.571130 systemd[1]: sshd@11-10.128.0.44:22-147.75.109.163:51876.service: Deactivated successfully. Nov 1 00:31:13.575148 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:31:13.577685 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:31:13.579166 systemd-logind[1438]: Removed session 12. Nov 1 00:31:13.738739 containerd[1462]: time="2025-11-01T00:31:13.738638905Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:13.741080 containerd[1462]: time="2025-11-01T00:31:13.740570683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:31:13.741080 containerd[1462]: time="2025-11-01T00:31:13.740672068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:31:13.741496 kubelet[2549]: E1101 00:31:13.741425 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:31:13.742178 kubelet[2549]: E1101 00:31:13.741502 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:31:13.742178 kubelet[2549]: E1101 00:31:13.741701 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,
TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:13.746177 containerd[1462]: time="2025-11-01T00:31:13.745036714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:31:13.945910 containerd[1462]: time="2025-11-01T00:31:13.945830730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:13.947490 containerd[1462]: time="2025-11-01T00:31:13.947309156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:31:13.947490 containerd[1462]: time="2025-11-01T00:31:13.947423484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:31:13.947726 kubelet[2549]: E1101 00:31:13.947622 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:31:13.947726 kubelet[2549]: E1101 
00:31:13.947689 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:31:13.947909 kubelet[2549]: E1101 00:31:13.947848 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFil
esystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9v2bt_calico-system(f2c53676-0b50-4c2c-9234-572240cab45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:13.949284 kubelet[2549]: E1101 00:31:13.949234 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:31:14.304825 containerd[1462]: time="2025-11-01T00:31:14.304326342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:31:14.514720 containerd[1462]: time="2025-11-01T00:31:14.514636138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 
00:31:14.516251 containerd[1462]: time="2025-11-01T00:31:14.516180408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:31:14.516393 containerd[1462]: time="2025-11-01T00:31:14.516294289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:31:14.516550 kubelet[2549]: E1101 00:31:14.516474 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:14.516550 kubelet[2549]: E1101 00:31:14.516542 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:14.516780 kubelet[2549]: E1101 00:31:14.516714 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhd88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-gxqtr_calico-apiserver(ce0ad95a-ccba-4cd4-91a4-5a94be968da8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:14.518290 kubelet[2549]: E1101 00:31:14.518239 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:31:15.304442 containerd[1462]: time="2025-11-01T00:31:15.304219937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:31:15.503027 containerd[1462]: time="2025-11-01T00:31:15.502939189Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:15.504486 containerd[1462]: time="2025-11-01T00:31:15.504392612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:31:15.504863 containerd[1462]: time="2025-11-01T00:31:15.504454421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:31:15.504934 kubelet[2549]: E1101 00:31:15.504795 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:15.504934 kubelet[2549]: E1101 00:31:15.504866 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:15.505504 kubelet[2549]: E1101 00:31:15.505070 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9btkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-htf5s_calico-apiserver(f01ebb62-cbae-4771-a12a-33c798f125cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:15.506944 kubelet[2549]: E1101 00:31:15.506894 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:31:16.305410 containerd[1462]: time="2025-11-01T00:31:16.305269294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:31:16.504874 containerd[1462]: 
time="2025-11-01T00:31:16.504807510Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:16.506393 containerd[1462]: time="2025-11-01T00:31:16.506335847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:31:16.506625 containerd[1462]: time="2025-11-01T00:31:16.506351287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:31:16.506719 kubelet[2549]: E1101 00:31:16.506654 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:31:16.507434 kubelet[2549]: E1101 00:31:16.506720 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:31:16.507434 kubelet[2549]: E1101 00:31:16.506901 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p65gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cjssg_calico-system(73f63e7d-cd05-453e-9fac-681616f1563c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:16.508679 kubelet[2549]: E1101 00:31:16.508604 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:31:18.629493 systemd[1]: Started sshd@12-10.128.0.44:22-147.75.109.163:51882.service - OpenSSH per-connection server daemon (147.75.109.163:51882). 
Nov 1 00:31:18.929888 sshd[5451]: Accepted publickey for core from 147.75.109.163 port 51882 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:18.931869 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:18.938701 systemd-logind[1438]: New session 13 of user core. Nov 1 00:31:18.943223 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:31:19.273323 sshd[5451]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:19.278153 systemd[1]: sshd@12-10.128.0.44:22-147.75.109.163:51882.service: Deactivated successfully. Nov 1 00:31:19.281332 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:31:19.283868 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:31:19.285775 systemd-logind[1438]: Removed session 13. Nov 1 00:31:21.304819 kubelet[2549]: E1101 00:31:21.304711 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:31:24.330701 systemd[1]: Started 
sshd@13-10.128.0.44:22-147.75.109.163:38906.service - OpenSSH per-connection server daemon (147.75.109.163:38906). Nov 1 00:31:24.624346 sshd[5470]: Accepted publickey for core from 147.75.109.163 port 38906 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:24.626440 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:24.633902 systemd-logind[1438]: New session 14 of user core. Nov 1 00:31:24.638243 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:31:24.949245 sshd[5470]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:24.955353 systemd[1]: sshd@13-10.128.0.44:22-147.75.109.163:38906.service: Deactivated successfully. Nov 1 00:31:24.958472 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:31:24.960002 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:31:24.961588 systemd-logind[1438]: Removed session 14. Nov 1 00:31:26.304488 kubelet[2549]: E1101 00:31:26.304033 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:31:27.303846 kubelet[2549]: E1101 00:31:27.303768 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:31:27.305098 kubelet[2549]: E1101 00:31:27.304895 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:31:30.005503 systemd[1]: Started sshd@14-10.128.0.44:22-147.75.109.163:38920.service - OpenSSH per-connection server daemon (147.75.109.163:38920). 
Nov 1 00:31:30.299259 sshd[5485]: Accepted publickey for core from 147.75.109.163 port 38920 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:30.301479 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:30.310063 kubelet[2549]: E1101 00:31:30.308553 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:31:30.317040 systemd-logind[1438]: New session 15 of user core. Nov 1 00:31:30.321250 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:31:30.594369 sshd[5485]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:30.602514 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:31:30.603252 systemd[1]: sshd@14-10.128.0.44:22-147.75.109.163:38920.service: Deactivated successfully. Nov 1 00:31:30.607945 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:31:30.610470 systemd-logind[1438]: Removed session 15. 
Nov 1 00:31:31.303813 kubelet[2549]: E1101 00:31:31.303740 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:31:33.304586 kubelet[2549]: E1101 00:31:33.304519 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:31:35.649461 systemd[1]: Started sshd@15-10.128.0.44:22-147.75.109.163:37924.service - OpenSSH per-connection server daemon (147.75.109.163:37924). 
Nov 1 00:31:35.940420 sshd[5522]: Accepted publickey for core from 147.75.109.163 port 37924 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:35.942906 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:35.949674 systemd-logind[1438]: New session 16 of user core. Nov 1 00:31:35.957691 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:31:36.235117 sshd[5522]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:36.241483 systemd[1]: sshd@15-10.128.0.44:22-147.75.109.163:37924.service: Deactivated successfully. Nov 1 00:31:36.245128 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:31:36.246332 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:31:36.247821 systemd-logind[1438]: Removed session 16. Nov 1 00:31:36.294430 systemd[1]: Started sshd@16-10.128.0.44:22-147.75.109.163:37930.service - OpenSSH per-connection server daemon (147.75.109.163:37930). Nov 1 00:31:36.589750 sshd[5535]: Accepted publickey for core from 147.75.109.163 port 37930 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:36.591762 sshd[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:36.597852 systemd-logind[1438]: New session 17 of user core. Nov 1 00:31:36.608251 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:31:36.991710 sshd[5535]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:36.997694 systemd[1]: sshd@16-10.128.0.44:22-147.75.109.163:37930.service: Deactivated successfully. Nov 1 00:31:37.000764 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:31:37.001894 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:31:37.003621 systemd-logind[1438]: Removed session 17. 
Nov 1 00:31:37.053426 systemd[1]: Started sshd@17-10.128.0.44:22-147.75.109.163:37938.service - OpenSSH per-connection server daemon (147.75.109.163:37938). Nov 1 00:31:37.357504 sshd[5545]: Accepted publickey for core from 147.75.109.163 port 37938 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:37.358885 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:37.367098 systemd-logind[1438]: New session 18 of user core. Nov 1 00:31:37.374216 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:31:38.308675 kubelet[2549]: E1101 00:31:38.307125 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:31:38.312280 kubelet[2549]: E1101 00:31:38.311410 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:31:38.419522 sshd[5545]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:38.427317 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:31:38.429332 systemd[1]: sshd@17-10.128.0.44:22-147.75.109.163:37938.service: Deactivated successfully. Nov 1 00:31:38.434576 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:31:38.437988 systemd-logind[1438]: Removed session 18. Nov 1 00:31:38.478795 systemd[1]: Started sshd@18-10.128.0.44:22-147.75.109.163:37954.service - OpenSSH per-connection server daemon (147.75.109.163:37954). Nov 1 00:31:38.777107 sshd[5565]: Accepted publickey for core from 147.75.109.163 port 37954 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:38.782690 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:38.797516 systemd-logind[1438]: New session 19 of user core. Nov 1 00:31:38.802244 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:31:39.201425 sshd[5565]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:39.207598 systemd[1]: sshd@18-10.128.0.44:22-147.75.109.163:37954.service: Deactivated successfully. Nov 1 00:31:39.210592 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:31:39.212185 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:31:39.213674 systemd-logind[1438]: Removed session 19. Nov 1 00:31:39.261464 systemd[1]: Started sshd@19-10.128.0.44:22-147.75.109.163:37966.service - OpenSSH per-connection server daemon (147.75.109.163:37966). 
Nov 1 00:31:39.555256 sshd[5576]: Accepted publickey for core from 147.75.109.163 port 37966 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:39.558227 sshd[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:39.572461 systemd-logind[1438]: New session 20 of user core. Nov 1 00:31:39.581458 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:31:39.846896 sshd[5576]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:39.853548 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:31:39.854275 systemd[1]: sshd@19-10.128.0.44:22-147.75.109.163:37966.service: Deactivated successfully. Nov 1 00:31:39.857543 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:31:39.859240 systemd-logind[1438]: Removed session 20. Nov 1 00:31:41.303163 kubelet[2549]: E1101 00:31:41.303101 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:31:42.308406 kubelet[2549]: E1101 00:31:42.306838 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:31:42.308406 kubelet[2549]: E1101 00:31:42.306961 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:31:44.904418 systemd[1]: Started sshd@20-10.128.0.44:22-147.75.109.163:47566.service - OpenSSH per-connection server daemon (147.75.109.163:47566). Nov 1 00:31:45.188618 sshd[5591]: Accepted publickey for core from 147.75.109.163 port 47566 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:45.190669 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:45.196922 systemd-logind[1438]: New session 21 of user core. Nov 1 00:31:45.199249 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:31:45.507215 sshd[5591]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:45.513599 systemd[1]: sshd@20-10.128.0.44:22-147.75.109.163:47566.service: Deactivated successfully. Nov 1 00:31:45.517550 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:31:45.519006 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:31:45.520720 systemd-logind[1438]: Removed session 21. 
Nov 1 00:31:47.303801 containerd[1462]: time="2025-11-01T00:31:47.303465028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:31:47.481126 containerd[1462]: time="2025-11-01T00:31:47.481046477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:47.482652 containerd[1462]: time="2025-11-01T00:31:47.482589661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:31:47.482872 containerd[1462]: time="2025-11-01T00:31:47.482697955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:31:47.482937 kubelet[2549]: E1101 00:31:47.482875 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:47.483449 kubelet[2549]: E1101 00:31:47.482938 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:47.483449 kubelet[2549]: E1101 00:31:47.483124 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9354361088054afe9becf34fc1077d69,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:47.486991 containerd[1462]: time="2025-11-01T00:31:47.486932838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:31:47.685757 containerd[1462]: time="2025-11-01T00:31:47.685584239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:47.687166 containerd[1462]: time="2025-11-01T00:31:47.687102413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:31:47.687356 containerd[1462]: time="2025-11-01T00:31:47.687137842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:31:47.687630 kubelet[2549]: E1101 00:31:47.687568 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:47.687774 kubelet[2549]: E1101 00:31:47.687629 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:47.687861 kubelet[2549]: E1101 00:31:47.687796 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjccf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768465fd8d-cxghm_calico-system(34a02912-f185-42ba-a75a-ca30896a4f61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:47.689108 kubelet[2549]: E1101 00:31:47.689038 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:31:49.303988 kubelet[2549]: E1101 00:31:49.303913 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903" Nov 1 00:31:50.561661 systemd[1]: Started sshd@21-10.128.0.44:22-147.75.109.163:59054.service - OpenSSH per-connection server daemon (147.75.109.163:59054). 
Nov 1 00:31:50.853215 sshd[5606]: Accepted publickey for core from 147.75.109.163 port 59054 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:50.856129 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:50.870761 systemd-logind[1438]: New session 22 of user core. Nov 1 00:31:50.872271 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:31:51.133192 sshd[5606]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:51.140585 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:31:51.144752 systemd[1]: sshd@21-10.128.0.44:22-147.75.109.163:59054.service: Deactivated successfully. Nov 1 00:31:51.148971 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:31:51.151323 systemd-logind[1438]: Removed session 22. Nov 1 00:31:51.305599 kubelet[2549]: E1101 00:31:51.305534 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9v2bt" podUID="f2c53676-0b50-4c2c-9234-572240cab45e" Nov 1 00:31:52.306531 kubelet[2549]: E1101 00:31:52.306466 2549 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:31:53.303470 kubelet[2549]: E1101 00:31:53.303369 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:31:54.305391 kubelet[2549]: E1101 00:31:54.305289 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-htf5s" podUID="f01ebb62-cbae-4771-a12a-33c798f125cd" Nov 1 00:31:56.191398 systemd[1]: Started sshd@22-10.128.0.44:22-147.75.109.163:59068.service - OpenSSH per-connection server daemon (147.75.109.163:59068). 
Nov 1 00:31:56.503498 sshd[5621]: Accepted publickey for core from 147.75.109.163 port 59068 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:31:56.507643 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:31:56.520190 systemd-logind[1438]: New session 23 of user core. Nov 1 00:31:56.526225 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:31:56.849464 sshd[5621]: pam_unix(sshd:session): session closed for user core Nov 1 00:31:56.860782 systemd[1]: sshd@22-10.128.0.44:22-147.75.109.163:59068.service: Deactivated successfully. Nov 1 00:31:56.865872 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:31:56.867515 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:31:56.869998 systemd-logind[1438]: Removed session 23. Nov 1 00:31:58.308732 kubelet[2549]: E1101 00:31:58.308543 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768465fd8d-cxghm" podUID="34a02912-f185-42ba-a75a-ca30896a4f61" Nov 1 00:32:00.640470 systemd[1]: 
run-containerd-runc-k8s.io-203b2452f3738a3ba71741802b37dccb0e3d4f018f1673972edaaafc102eeb4c-runc.NImQDG.mount: Deactivated successfully. Nov 1 00:32:01.909961 systemd[1]: Started sshd@23-10.128.0.44:22-147.75.109.163:34392.service - OpenSSH per-connection server daemon (147.75.109.163:34392). Nov 1 00:32:02.230234 sshd[5660]: Accepted publickey for core from 147.75.109.163 port 34392 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:32:02.231864 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:32:02.246107 systemd-logind[1438]: New session 24 of user core. Nov 1 00:32:02.250043 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:32:02.601358 sshd[5660]: pam_unix(sshd:session): session closed for user core Nov 1 00:32:02.608890 systemd[1]: sshd@23-10.128.0.44:22-147.75.109.163:34392.service: Deactivated successfully. Nov 1 00:32:02.618601 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:32:02.622080 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:32:02.625690 systemd-logind[1438]: Removed session 24. 
Nov 1 00:32:03.305048 containerd[1462]: time="2025-11-01T00:32:03.304712122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:32:03.515659 containerd[1462]: time="2025-11-01T00:32:03.515293193Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:03.518801 containerd[1462]: time="2025-11-01T00:32:03.518733929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:32:03.518947 containerd[1462]: time="2025-11-01T00:32:03.518858247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:03.519156 kubelet[2549]: E1101 00:32:03.519095 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:32:03.519725 kubelet[2549]: E1101 00:32:03.519179 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:32:03.520308 kubelet[2549]: E1101 00:32:03.519364 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p65gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cjssg_calico-system(73f63e7d-cd05-453e-9fac-681616f1563c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:03.521500 kubelet[2549]: E1101 00:32:03.521352 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cjssg" podUID="73f63e7d-cd05-453e-9fac-681616f1563c" Nov 1 00:32:04.311092 containerd[1462]: time="2025-11-01T00:32:04.310774151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:32:04.505509 containerd[1462]: time="2025-11-01T00:32:04.505247340Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:32:04.506935 containerd[1462]: time="2025-11-01T00:32:04.506750539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:32:04.506935 containerd[1462]: time="2025-11-01T00:32:04.506883537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:04.509793 kubelet[2549]: E1101 00:32:04.507290 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:04.509793 kubelet[2549]: E1101 00:32:04.507353 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:04.509793 kubelet[2549]: E1101 00:32:04.507640 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhd88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb458b5b7-gxqtr_calico-apiserver(ce0ad95a-ccba-4cd4-91a4-5a94be968da8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:04.510191 containerd[1462]: time="2025-11-01T00:32:04.509461770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:32:04.510525 kubelet[2549]: E1101 00:32:04.510481 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb458b5b7-gxqtr" podUID="ce0ad95a-ccba-4cd4-91a4-5a94be968da8" Nov 1 00:32:04.715310 containerd[1462]: time="2025-11-01T00:32:04.715158615Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:04.717752 containerd[1462]: time="2025-11-01T00:32:04.717662343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:32:04.717904 containerd[1462]: time="2025-11-01T00:32:04.717786844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:32:04.718169 kubelet[2549]: E1101 00:32:04.718080 2549 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:32:04.718676 kubelet[2549]: E1101 00:32:04.718262 2549 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:32:04.719917 kubelet[2549]: E1101 00:32:04.719731 2549 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrrqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-749d6dfb67-b9g5c_calico-system(eb24b203-bba2-4a68-ac20-bbf747c87903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:04.721647 kubelet[2549]: E1101 00:32:04.721537 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-749d6dfb67-b9g5c" podUID="eb24b203-bba2-4a68-ac20-bbf747c87903"