Nov 1 00:20:12.116146 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 00:20:12.116192 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:12.116210 kernel: BIOS-provided physical RAM map: Nov 1 00:20:12.116224 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Nov 1 00:20:12.116237 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Nov 1 00:20:12.116251 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Nov 1 00:20:12.116267 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Nov 1 00:20:12.116286 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Nov 1 00:20:12.116301 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Nov 1 00:20:12.116315 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Nov 1 00:20:12.116329 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Nov 1 00:20:12.116344 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Nov 1 00:20:12.116358 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Nov 1 00:20:12.116373 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Nov 1 00:20:12.116396 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Nov 1 00:20:12.116412 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Nov 1 00:20:12.116428 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable 
Nov 1 00:20:12.116444 kernel: NX (Execute Disable) protection: active Nov 1 00:20:12.116460 kernel: APIC: Static calls initialized Nov 1 00:20:12.116476 kernel: efi: EFI v2.7 by EDK II Nov 1 00:20:12.116493 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 Nov 1 00:20:12.116509 kernel: SMBIOS 2.4 present. Nov 1 00:20:12.116526 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025 Nov 1 00:20:12.116542 kernel: Hypervisor detected: KVM Nov 1 00:20:12.116562 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:20:12.116578 kernel: kvm-clock: using sched offset of 13389633731 cycles Nov 1 00:20:12.116605 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:20:12.116622 kernel: tsc: Detected 2299.998 MHz processor Nov 1 00:20:12.116638 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:20:12.116655 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:20:12.117226 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Nov 1 00:20:12.117254 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Nov 1 00:20:12.117273 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:20:12.117298 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Nov 1 00:20:12.117315 kernel: Using GB pages for direct mapping Nov 1 00:20:12.117332 kernel: Secure boot disabled Nov 1 00:20:12.117349 kernel: ACPI: Early table checksum verification disabled Nov 1 00:20:12.117367 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Nov 1 00:20:12.117384 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Nov 1 00:20:12.117403 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Nov 1 00:20:12.117427 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 
GOOG 00000001) Nov 1 00:20:12.117449 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Nov 1 00:20:12.117466 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Nov 1 00:20:12.117484 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Nov 1 00:20:12.117501 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Nov 1 00:20:12.117519 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Nov 1 00:20:12.117536 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Nov 1 00:20:12.117555 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Nov 1 00:20:12.117572 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Nov 1 00:20:12.117589 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Nov 1 00:20:12.117615 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Nov 1 00:20:12.117632 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Nov 1 00:20:12.117960 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Nov 1 00:20:12.117979 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Nov 1 00:20:12.117997 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Nov 1 00:20:12.118014 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Nov 1 00:20:12.118037 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Nov 1 00:20:12.118054 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 1 00:20:12.118071 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 00:20:12.118090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 1 00:20:12.118107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Nov 1 00:20:12.118124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Nov 1 
00:20:12.118142 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Nov 1 00:20:12.118160 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Nov 1 00:20:12.118178 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Nov 1 00:20:12.118199 kernel: Zone ranges: Nov 1 00:20:12.118217 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:20:12.118234 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 00:20:12.118251 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Nov 1 00:20:12.118268 kernel: Movable zone start for each node Nov 1 00:20:12.118285 kernel: Early memory node ranges Nov 1 00:20:12.118302 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Nov 1 00:20:12.118319 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Nov 1 00:20:12.118337 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Nov 1 00:20:12.118358 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Nov 1 00:20:12.118375 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Nov 1 00:20:12.118392 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Nov 1 00:20:12.118409 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:20:12.118427 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Nov 1 00:20:12.118444 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Nov 1 00:20:12.118461 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 1 00:20:12.118478 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Nov 1 00:20:12.118495 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 1 00:20:12.118512 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:20:12.118533 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:20:12.118551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 
00:20:12.118568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:20:12.118585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:20:12.118611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:20:12.118628 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:20:12.118645 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:20:12.118675 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 1 00:20:12.118697 kernel: Booting paravirtualized kernel on KVM Nov 1 00:20:12.118715 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:20:12.118732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:20:12.118791 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 1 00:20:12.118811 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 1 00:20:12.118828 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:20:12.118845 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:20:12.118862 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:20:12.118882 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:12.118907 kernel: random: crng init done Nov 1 00:20:12.118924 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 1 00:20:12.118943 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:20:12.118962 kernel: Fallback order for Node 0: 0 Nov 1 00:20:12.118980 kernel: Built 1 zonelists, mobility grouping 
on. Total pages: 1932280 Nov 1 00:20:12.118998 kernel: Policy zone: Normal Nov 1 00:20:12.119025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:20:12.119042 kernel: software IO TLB: area num 2. Nov 1 00:20:12.119061 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 346940K reserved, 0K cma-reserved) Nov 1 00:20:12.119084 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:20:12.119101 kernel: Kernel/User page tables isolation: enabled Nov 1 00:20:12.119119 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:20:12.119137 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:20:12.119156 kernel: Dynamic Preempt: voluntary Nov 1 00:20:12.119174 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:20:12.119199 kernel: rcu: RCU event tracing is enabled. Nov 1 00:20:12.119219 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:20:12.119255 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:20:12.119273 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:20:12.119291 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:20:12.119308 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:20:12.119331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:20:12.119349 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 1 00:20:12.119366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 1 00:20:12.119385 kernel: Console: colour dummy device 80x25 Nov 1 00:20:12.119407 kernel: printk: console [ttyS0] enabled Nov 1 00:20:12.119426 kernel: ACPI: Core revision 20230628 Nov 1 00:20:12.119446 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:20:12.119465 kernel: x2apic enabled Nov 1 00:20:12.119485 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:20:12.119504 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 1 00:20:12.119524 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 1 00:20:12.119544 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Nov 1 00:20:12.119563 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 1 00:20:12.119587 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 1 00:20:12.119615 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:20:12.119635 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 00:20:12.119655 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 00:20:12.119699 kernel: Spectre V2 : Mitigation: IBRS Nov 1 00:20:12.119716 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:20:12.119733 kernel: RETBleed: Mitigation: IBRS Nov 1 00:20:12.119752 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:20:12.119770 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 1 00:20:12.119796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:20:12.119814 kernel: MDS: Mitigation: Clear CPU buffers Nov 1 00:20:12.119834 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:20:12.119854 kernel: active return thunk: its_return_thunk Nov 1 00:20:12.119870 kernel: ITS: Mitigation: 
Aligned branch/return thunks Nov 1 00:20:12.119888 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:20:12.119905 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:20:12.119923 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:20:12.119941 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:20:12.119967 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 1 00:20:12.119988 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:20:12.120015 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:20:12.120032 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:20:12.121727 kernel: landlock: Up and running. Nov 1 00:20:12.121747 kernel: SELinux: Initializing. Nov 1 00:20:12.121764 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:20:12.121782 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:20:12.121801 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 1 00:20:12.121828 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:20:12.121847 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:20:12.121864 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:20:12.121881 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 1 00:20:12.121899 kernel: signal: max sigframe size: 1776 Nov 1 00:20:12.121918 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:20:12.121937 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:20:12.121956 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 00:20:12.121975 kernel: smp: Bringing up secondary CPUs ... 
Nov 1 00:20:12.121999 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:20:12.122017 kernel: .... node #0, CPUs: #1 Nov 1 00:20:12.122037 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 1 00:20:12.122057 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 00:20:12.122075 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:20:12.122094 kernel: smpboot: Max logical packages: 1 Nov 1 00:20:12.122112 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 1 00:20:12.122131 kernel: devtmpfs: initialized Nov 1 00:20:12.122155 kernel: x86/mm: Memory block size: 128MB Nov 1 00:20:12.122175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 1 00:20:12.122195 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:20:12.122214 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:20:12.122234 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:20:12.122252 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:20:12.122271 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:20:12.122289 kernel: audit: type=2000 audit(1761956410.303:1): state=initialized audit_enabled=0 res=1 Nov 1 00:20:12.122308 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:20:12.122331 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:20:12.122350 kernel: cpuidle: using governor menu Nov 1 00:20:12.122369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:20:12.122387 kernel: dca service started, version 1.12.1 Nov 1 00:20:12.122407 kernel: PCI: Using configuration type 1 for base access Nov 1 00:20:12.122427 kernel: 
kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:20:12.122447 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:20:12.122465 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:20:12.122483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:20:12.122506 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:20:12.122525 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:20:12.122543 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:20:12.122563 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:20:12.122582 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 1 00:20:12.122610 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:20:12.122629 kernel: ACPI: Interpreter enabled Nov 1 00:20:12.122649 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:20:12.122701 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:20:12.122725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:20:12.122744 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 1 00:20:12.122763 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 1 00:20:12.122781 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:20:12.123049 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:20:12.123253 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 1 00:20:12.123451 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 1 00:20:12.123481 kernel: PCI host bridge to bus 0000:00 Nov 1 00:20:12.125725 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:20:12.125935 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:20:12.126127 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:20:12.126301 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Nov 1 00:20:12.126467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:20:12.126723 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 1 00:20:12.126933 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Nov 1 00:20:12.127135 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 1 00:20:12.127317 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 1 00:20:12.127506 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Nov 1 00:20:12.129821 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 1 00:20:12.130077 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Nov 1 00:20:12.130297 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:20:12.130495 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Nov 1 00:20:12.130944 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Nov 1 00:20:12.131187 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:20:12.131405 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Nov 1 00:20:12.131607 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Nov 1 00:20:12.131634 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:20:12.131662 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:20:12.131936 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:20:12.131958 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:20:12.131978 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 1 00:20:12.131998 kernel: iommu: Default domain type: Translated Nov 1 00:20:12.132018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:20:12.132038 kernel: efivars: Registered efivars operations Nov 1 00:20:12.132059 
kernel: PCI: Using ACPI for IRQ routing Nov 1 00:20:12.132080 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:20:12.132099 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Nov 1 00:20:12.132125 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Nov 1 00:20:12.132145 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Nov 1 00:20:12.132165 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Nov 1 00:20:12.132184 kernel: vgaarb: loaded Nov 1 00:20:12.132204 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:20:12.132222 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:20:12.132242 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:20:12.132263 kernel: pnp: PnP ACPI init Nov 1 00:20:12.132283 kernel: pnp: PnP ACPI: found 7 devices Nov 1 00:20:12.132308 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:20:12.132328 kernel: NET: Registered PF_INET protocol family Nov 1 00:20:12.132348 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 00:20:12.132368 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 1 00:20:12.132389 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:20:12.132409 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:20:12.132428 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 00:20:12.132449 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 1 00:20:12.132473 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 1 00:20:12.132493 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 1 00:20:12.132513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:20:12.132533 kernel: NET: Registered PF_XDP protocol family Nov 1 00:20:12.132753 kernel: pci_bus 
0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:20:12.132949 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:20:12.133155 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:20:12.133333 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Nov 1 00:20:12.133542 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 1 00:20:12.133568 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:20:12.133587 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:20:12.133607 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Nov 1 00:20:12.133626 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:20:12.133646 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 1 00:20:12.133689 kernel: clocksource: Switched to clocksource tsc Nov 1 00:20:12.133710 kernel: Initialise system trusted keyrings Nov 1 00:20:12.133736 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 1 00:20:12.133755 kernel: Key type asymmetric registered Nov 1 00:20:12.133774 kernel: Asymmetric key parser 'x509' registered Nov 1 00:20:12.133793 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:20:12.133812 kernel: io scheduler mq-deadline registered Nov 1 00:20:12.133832 kernel: io scheduler kyber registered Nov 1 00:20:12.133852 kernel: io scheduler bfq registered Nov 1 00:20:12.133871 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:20:12.133891 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 1 00:20:12.134105 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Nov 1 00:20:12.134130 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Nov 1 00:20:12.134318 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Nov 1 00:20:12.134343 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 1 
00:20:12.134529 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Nov 1 00:20:12.134553 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:20:12.134572 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:20:12.134592 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 00:20:12.134612 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Nov 1 00:20:12.134636 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Nov 1 00:20:12.134848 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Nov 1 00:20:12.134875 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:20:12.134895 kernel: i8042: Warning: Keylock active Nov 1 00:20:12.134914 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:20:12.134940 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:20:12.135128 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 1 00:20:12.135312 kernel: rtc_cmos 00:00: registered as rtc0 Nov 1 00:20:12.135502 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:20:11 UTC (1761956411) Nov 1 00:20:12.135717 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 1 00:20:12.135742 kernel: intel_pstate: CPU model not supported Nov 1 00:20:12.135762 kernel: pstore: Using crash dump compression: deflate Nov 1 00:20:12.135781 kernel: pstore: Registered efi_pstore as persistent store backend Nov 1 00:20:12.135800 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:20:12.135819 kernel: Segment Routing with IPv6 Nov 1 00:20:12.135838 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:20:12.135863 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:20:12.135883 kernel: Key type dns_resolver registered Nov 1 00:20:12.135902 kernel: IPI shorthand broadcast: enabled Nov 1 00:20:12.135921 kernel: sched_clock: Marking stable (909005933, 178969774)->(1179696519, 
-91720812) Nov 1 00:20:12.135948 kernel: registered taskstats version 1 Nov 1 00:20:12.135967 kernel: Loading compiled-in X.509 certificates Nov 1 00:20:12.135986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:20:12.136006 kernel: Key type .fscrypt registered Nov 1 00:20:12.136024 kernel: Key type fscrypt-provisioning registered Nov 1 00:20:12.136047 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:20:12.136066 kernel: ima: No architecture policies found Nov 1 00:20:12.136086 kernel: clk: Disabling unused clocks Nov 1 00:20:12.136105 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:20:12.136124 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:20:12.136144 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 1 00:20:12.136164 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:20:12.136183 kernel: Run /init as init process Nov 1 00:20:12.136203 kernel: with arguments: Nov 1 00:20:12.136226 kernel: /init Nov 1 00:20:12.136245 kernel: with environment: Nov 1 00:20:12.136264 kernel: HOME=/ Nov 1 00:20:12.136283 kernel: TERM=linux Nov 1 00:20:12.136306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:20:12.136329 systemd[1]: Detected virtualization google. Nov 1 00:20:12.136349 systemd[1]: Detected architecture x86-64. Nov 1 00:20:12.136373 systemd[1]: Running in initrd. Nov 1 00:20:12.136392 systemd[1]: No hostname configured, using default hostname. Nov 1 00:20:12.136412 systemd[1]: Hostname set to . 
Nov 1 00:20:12.136433 systemd[1]: Initializing machine ID from random generator. Nov 1 00:20:12.136453 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:20:12.136472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:12.136493 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:12.136514 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:20:12.136538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:20:12.136559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:20:12.136579 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:20:12.136602 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:20:12.136623 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:20:12.136643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:12.136674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:12.136700 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:12.136720 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:20:12.136760 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:20:12.136785 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:20:12.136806 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:20:12.136827 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:20:12.136852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Nov 1 00:20:12.136873 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:20:12.136894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:12.136915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:12.136943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:12.136964 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:12.136985 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:20:12.137006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:20:12.137027 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:20:12.137051 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:20:12.137072 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:20:12.137094 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:20:12.137115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:12.137178 systemd-journald[183]: Collecting audit messages is disabled. Nov 1 00:20:12.137229 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:20:12.137250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:12.137271 systemd-journald[183]: Journal started Nov 1 00:20:12.137313 systemd-journald[183]: Runtime Journal (/run/log/journal/5fbd200d931d42428bb1d9cb1c41bc41) is 8.0M, max 148.7M, 140.7M free. Nov 1 00:20:12.140423 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:20:12.140425 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:20:12.150148 systemd-modules-load[184]: Inserted module 'overlay' Nov 1 00:20:12.155910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 1 00:20:12.168953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:12.171276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:12.183309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:20:12.197755 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:20:12.201013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:12.202174 kernel: Bridge firewalling registered Nov 1 00:20:12.201480 systemd-modules-load[184]: Inserted module 'br_netfilter' Nov 1 00:20:12.204891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:20:12.205357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:20:12.210162 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:20:12.227784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:12.241549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:12.249187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:12.254088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:20:12.264007 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:20:12.268203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 1 00:20:12.286055 dracut-cmdline[216]: dracut-dracut-053 Nov 1 00:20:12.290504 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:12.340158 systemd-resolved[217]: Positive Trust Anchors: Nov 1 00:20:12.340179 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:12.340250 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:12.347412 systemd-resolved[217]: Defaulting to hostname 'linux'. Nov 1 00:20:12.351537 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:12.367558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:12.403716 kernel: SCSI subsystem initialized Nov 1 00:20:12.416717 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:20:12.429802 kernel: iscsi: registered transport (tcp) Nov 1 00:20:12.455029 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:20:12.455119 kernel: QLogic iSCSI HBA Driver Nov 1 00:20:12.509351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 1 00:20:12.517960 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:20:12.560926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:20:12.561012 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:20:12.561041 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:20:12.607737 kernel: raid6: avx2x4 gen() 18024 MB/s Nov 1 00:20:12.624758 kernel: raid6: avx2x2 gen() 17978 MB/s Nov 1 00:20:12.642777 kernel: raid6: avx2x1 gen() 13905 MB/s Nov 1 00:20:12.642878 kernel: raid6: using algorithm avx2x4 gen() 18024 MB/s Nov 1 00:20:12.661514 kernel: raid6: .... xor() 6554 MB/s, rmw enabled Nov 1 00:20:12.661571 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:20:12.685715 kernel: xor: automatically using best checksumming function avx Nov 1 00:20:12.868708 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:20:12.883404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:20:12.893946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:12.935780 systemd-udevd[400]: Using default interface naming scheme 'v255'. Nov 1 00:20:12.942804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:12.973930 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:20:13.015171 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Nov 1 00:20:13.054064 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:20:13.080971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:20:13.180556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:13.191051 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 1 00:20:13.253535 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:13.274638 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:20:13.288277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:13.320346 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:20:13.351011 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:13.403219 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:20:13.403029 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:20:13.422864 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Nov 1 00:20:13.440053 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:20:13.440147 kernel: AES CTR mode by8 optimization enabled Nov 1 00:20:13.450999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:13.451205 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:13.496953 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Nov 1 00:20:13.497305 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Nov 1 00:20:13.497621 kernel: sd 0:0:1:0: [sda] Write Protect is off Nov 1 00:20:13.498985 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Nov 1 00:20:13.499216 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:20:13.530029 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:20:13.530109 kernel: GPT:17805311 != 33554431 Nov 1 00:20:13.530134 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:20:13.530158 kernel: GPT:17805311 != 33554431 Nov 1 00:20:13.530181 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 00:20:13.530206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.544262 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:13.561868 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Nov 1 00:20:13.573824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:13.574233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:13.604848 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:13.646881 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (461) Nov 1 00:20:13.646926 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (452) Nov 1 00:20:13.643080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:13.657573 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:13.697732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:13.736036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Nov 1 00:20:13.748168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Nov 1 00:20:13.776130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 1 00:20:13.787086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Nov 1 00:20:13.802089 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Nov 1 00:20:13.831188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:20:13.865895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:13.895071 disk-uuid[539]: Primary Header is updated. 
Nov 1 00:20:13.895071 disk-uuid[539]: Secondary Entries is updated. Nov 1 00:20:13.895071 disk-uuid[539]: Secondary Header is updated. Nov 1 00:20:13.915095 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.928703 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.946694 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.956636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:14.944429 disk-uuid[541]: The operation has completed successfully. Nov 1 00:20:14.952866 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:15.027936 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:20:15.028104 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:20:15.070971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:20:15.090967 sh[566]: Success Nov 1 00:20:15.104879 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:20:15.198137 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:20:15.205575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:20:15.229301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:20:15.285385 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:20:15.285476 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:15.285519 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:20:15.294858 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:20:15.301694 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:20:15.344729 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:20:15.351696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Nov 1 00:20:15.352709 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:20:15.357934 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:20:15.380609 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:20:15.470126 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:15.470179 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:15.470205 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:15.470229 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:15.470254 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:15.470288 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:15.461056 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:20:15.479359 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:20:15.497948 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:20:15.585228 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:15.593010 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:20:15.704362 systemd-networkd[749]: lo: Link UP Nov 1 00:20:15.704834 systemd-networkd[749]: lo: Gained carrier Nov 1 00:20:15.705926 ignition[673]: Ignition 2.19.0 Nov 1 00:20:15.707280 systemd-networkd[749]: Enumeration completed Nov 1 00:20:15.705939 ignition[673]: Stage: fetch-offline Nov 1 00:20:15.707433 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 1 00:20:15.706019 ignition[673]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.708406 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:15.706038 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.708413 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:15.706420 ignition[673]: parsed url from cmdline: "" Nov 1 00:20:15.710887 systemd-networkd[749]: eth0: Link UP Nov 1 00:20:15.706428 ignition[673]: no config URL provided Nov 1 00:20:15.710893 systemd-networkd[749]: eth0: Gained carrier Nov 1 00:20:15.706440 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:15.710904 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:15.706464 ignition[673]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:15.727244 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:20:15.706479 ignition[673]: failed to fetch config: resource requires networking Nov 1 00:20:15.730788 systemd-networkd[749]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:20:15.706943 ignition[673]: Ignition finished successfully Nov 1 00:20:15.730804 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.8/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 1 00:20:15.800314 ignition[757]: Ignition 2.19.0 Nov 1 00:20:15.742744 systemd[1]: Reached target network.target - Network. Nov 1 00:20:15.800323 ignition[757]: Stage: fetch Nov 1 00:20:15.764917 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 1 00:20:15.800541 ignition[757]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.810882 unknown[757]: fetched base config from "system" Nov 1 00:20:15.800556 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.810906 unknown[757]: fetched base config from "system" Nov 1 00:20:15.800726 ignition[757]: parsed url from cmdline: "" Nov 1 00:20:15.810919 unknown[757]: fetched user config from "gcp" Nov 1 00:20:15.800734 ignition[757]: no config URL provided Nov 1 00:20:15.834323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:20:15.800744 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:15.857934 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:20:15.800759 ignition[757]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:15.882636 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:20:15.800789 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Nov 1 00:20:15.890905 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:20:15.805305 ignition[757]: GET result: OK Nov 1 00:20:15.948246 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:20:15.805416 ignition[757]: parsing config with SHA512: ec853cf2b35ff770574752c5fd26101b8041d1024ed16d63ba5e3d0a02347038233870b6e6f5583af4b6a6b2898fc6b3a77d71d7352c37e66950c3abff102e43 Nov 1 00:20:15.984284 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:15.811836 ignition[757]: fetch: fetch complete Nov 1 00:20:16.001847 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:20:15.811855 ignition[757]: fetch: fetch passed Nov 1 00:20:16.024902 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 1 00:20:15.811928 ignition[757]: Ignition finished successfully Nov 1 00:20:16.038905 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:15.879981 ignition[764]: Ignition 2.19.0 Nov 1 00:20:16.054875 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:15.879990 ignition[764]: Stage: kargs Nov 1 00:20:16.074913 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:20:15.880183 ignition[764]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.880195 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.881348 ignition[764]: kargs: kargs passed Nov 1 00:20:15.881409 ignition[764]: Ignition finished successfully Nov 1 00:20:15.935524 ignition[769]: Ignition 2.19.0 Nov 1 00:20:15.935534 ignition[769]: Stage: disks Nov 1 00:20:15.935784 ignition[769]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.935797 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.937181 ignition[769]: disks: disks passed Nov 1 00:20:15.937250 ignition[769]: Ignition finished successfully Nov 1 00:20:16.121365 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 1 00:20:16.330561 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:20:16.363877 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:20:16.488992 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:20:16.489874 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:20:16.490775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:20:16.522853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:16.527634 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 1 00:20:16.557439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:20:16.557542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:20:16.645022 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (786) Nov 1 00:20:16.645071 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:16.645088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:16.645118 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:16.645132 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:16.645148 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:16.557585 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:16.629047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:20:16.654064 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:20:16.679914 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:20:16.806033 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:20:16.817068 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:20:16.826921 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:20:16.836872 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:20:16.882858 systemd-networkd[749]: eth0: Gained IPv6LL Nov 1 00:20:16.982518 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:20:17.012878 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 1 00:20:17.040871 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:17.038877 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:20:17.059345 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:20:17.083617 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:20:17.098580 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:20:17.116859 ignition[899]: INFO : Ignition 2.19.0 Nov 1 00:20:17.116859 ignition[899]: INFO : Stage: mount Nov 1 00:20:17.116859 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:17.116859 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:17.116859 ignition[899]: INFO : mount: mount passed Nov 1 00:20:17.116859 ignition[899]: INFO : Ignition finished successfully Nov 1 00:20:17.115818 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:20:17.127045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:17.206705 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (910) Nov 1 00:20:17.224618 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:17.224715 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:17.224741 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:17.248096 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:17.248198 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:17.251827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 00:20:17.290164 ignition[927]: INFO : Ignition 2.19.0 Nov 1 00:20:17.290164 ignition[927]: INFO : Stage: files Nov 1 00:20:17.305845 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:17.305845 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:17.305845 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:20:17.305845 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:20:17.305845 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:20:17.305845 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:20:17.303421 unknown[927]: wrote ssh authorized keys file for user: core Nov 1 00:20:17.503210 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:20:17.803493 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:20:17.820843 
ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: 
attempt #1 Nov 1 00:20:18.351607 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:20:19.150351 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:20:19.150351 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:20:19.169037 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:20:19.169037 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:20:19.169037 ignition[927]: INFO : files: files passed Nov 1 00:20:19.169037 ignition[927]: INFO : Ignition finished successfully Nov 1 00:20:19.156547 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:20:19.208899 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:20:19.244905 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:20:19.254480 systemd[1]: ignition-quench.service: Deactivated successfully. 
Nov 1 00:20:19.388960 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:20:19.254602 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:20:19.423881 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:20:19.423881 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:20:19.339223 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:20:19.360842 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:20:19.385917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:20:19.468377 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:20:19.468507 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:20:19.483729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:20:19.503988 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:20:19.512207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:20:19.519054 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:20:19.606754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:20:19.625026 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:20:19.661479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:19.661805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:19.693162 systemd[1]: Stopped target timers.target - Timer Units. 
Nov 1 00:20:19.712039 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:20:19.712262 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:20:19.740103 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:20:19.761037 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:20:19.779168 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:20:19.797068 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:19.818174 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:19.840059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:20:19.860139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:20:19.881081 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:20:19.902137 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:20:19.922136 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:20:19.941067 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:20:19.941292 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:19.967149 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:19.987088 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:20.008031 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:20:20.008239 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:20.029986 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:20:20.030213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:20.062179 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 1 00:20:20.062418 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:20:20.082178 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:20:20.082385 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:20:20.107976 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:20:20.159875 ignition[980]: INFO : Ignition 2.19.0
Nov 1 00:20:20.159875 ignition[980]: INFO : Stage: umount
Nov 1 00:20:20.159875 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:20:20.159875 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:20:20.159875 ignition[980]: INFO : umount: umount passed
Nov 1 00:20:20.159875 ignition[980]: INFO : Ignition finished successfully
Nov 1 00:20:20.120823 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:20:20.121126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:20:20.178290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:20:20.207028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:20:20.207267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:20:20.238216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:20:20.238406 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:20:20.274295 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:20:20.275443 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:20:20.275562 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:20:20.280550 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:20:20.280677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:20:20.299313 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:20:20.299443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:20:20.316312 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:20:20.316376 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:20:20.342066 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:20:20.342148 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:20:20.352193 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:20:20.352263 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 00:20:20.368150 systemd[1]: Stopped target network.target - Network.
Nov 1 00:20:20.394027 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:20:20.394135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:20:20.412954 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:20:20.429848 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:20:20.429934 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:20:20.430035 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:20:20.456944 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:20:20.472932 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:20:20.473022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:20:20.493924 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:20:20.494027 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:20:20.511924 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:20:20.512048 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:20:20.531949 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:20:20.532075 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:20:20.549983 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:20:20.550097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:20:20.568201 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:20:20.570759 systemd-networkd[749]: eth0: DHCPv6 lease lost
Nov 1 00:20:20.587084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:20:20.606357 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:20:20.606509 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:20:20.625378 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:20:20.625755 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:20:20.645710 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:20:20.645812 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:20:20.670834 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:20:20.674020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:20:20.674103 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:20:20.722923 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:20:20.723019 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:20:20.740930 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:20:20.741059 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:20:20.759933 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:20:20.760051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:20:20.780114 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:20:20.793219 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:20:20.793450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:20:20.828394 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:20:21.212811 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:20:20.828518 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:20:20.847425 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:20:20.847505 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:20:20.866179 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:20:20.866255 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:20:20.887024 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:20:20.887133 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:20:20.917212 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:20:20.917299 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:20:20.960903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:20:20.961152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:20:21.013958 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:20:21.036831 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:20:21.036954 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:20:21.056984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:20:21.057091 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:20:21.079431 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:20:21.079554 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:20:21.099197 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:20:21.123906 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:20:21.161432 systemd[1]: Switching root.
Nov 1 00:20:21.421870 systemd-journald[183]: Journal stopped
Nov 1 00:20:12.116146 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:20:12.116192 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:12.116210 kernel: BIOS-provided physical RAM map:
Nov 1 00:20:12.116224 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 1 00:20:12.116237 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 1 00:20:12.116251 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 1 00:20:12.116267 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 1 00:20:12.116286 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 1 00:20:12.116301 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Nov 1 00:20:12.116315 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Nov 1 00:20:12.116329 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Nov 1 00:20:12.116344 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Nov 1 00:20:12.116358 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 1 00:20:12.116373 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 1 00:20:12.116396 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 1 00:20:12.116412 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 1 00:20:12.116428 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 1 00:20:12.116444 kernel: NX (Execute Disable) protection: active
Nov 1 00:20:12.116460 kernel: APIC: Static calls initialized
Nov 1 00:20:12.116476 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:20:12.116493 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018
Nov 1 00:20:12.116509 kernel: SMBIOS 2.4 present.
Nov 1 00:20:12.116526 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Nov 1 00:20:12.116542 kernel: Hypervisor detected: KVM
Nov 1 00:20:12.116562 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:20:12.116578 kernel: kvm-clock: using sched offset of 13389633731 cycles
Nov 1 00:20:12.116605 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:20:12.116622 kernel: tsc: Detected 2299.998 MHz processor
Nov 1 00:20:12.116638 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:20:12.116655 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:20:12.117226 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 1 00:20:12.117254 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Nov 1 00:20:12.117273 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:20:12.117298 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 1 00:20:12.117315 kernel: Using GB pages for direct mapping
Nov 1 00:20:12.117332 kernel: Secure boot disabled
Nov 1 00:20:12.117349 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:20:12.117367 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 1 00:20:12.117384 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 1 00:20:12.117403 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 1 00:20:12.117427 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 1 00:20:12.117449 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 1 00:20:12.117466 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Nov 1 00:20:12.117484 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 1 00:20:12.117501 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 1 00:20:12.117519 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 1 00:20:12.117536 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 1 00:20:12.117555 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 1 00:20:12.117572 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 1 00:20:12.117589 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 1 00:20:12.117615 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 1 00:20:12.117632 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 1 00:20:12.117960 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 1 00:20:12.117979 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 1 00:20:12.117997 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 1 00:20:12.118014 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 1 00:20:12.118037 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 1 00:20:12.118054 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:20:12.118071 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:20:12.118090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:20:12.118107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 1 00:20:12.118124 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 1 00:20:12.118142 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Nov 1 00:20:12.118160 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Nov 1 00:20:12.118178 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Nov 1 00:20:12.118199 kernel: Zone ranges:
Nov 1 00:20:12.118217 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:20:12.118234 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:20:12.118251 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:20:12.118268 kernel: Movable zone start for each node
Nov 1 00:20:12.118285 kernel: Early memory node ranges
Nov 1 00:20:12.118302 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 1 00:20:12.118319 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 1 00:20:12.118337 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Nov 1 00:20:12.118358 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 1 00:20:12.118375 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:20:12.118392 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 1 00:20:12.118409 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:20:12.118427 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 1 00:20:12.118444 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 1 00:20:12.118461 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 00:20:12.118478 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 1 00:20:12.118495 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 1 00:20:12.118512 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:20:12.118533 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:20:12.118551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:20:12.118568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:20:12.118585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:20:12.118611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:20:12.118628 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:20:12.118645 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:20:12.118675 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 1 00:20:12.118697 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:20:12.118715 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:20:12.118732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:20:12.118791 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:20:12.118811 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:20:12.118828 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:20:12.118845 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:20:12.118862 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:20:12.118882 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:12.118907 kernel: random: crng init done
Nov 1 00:20:12.118924 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:20:12.118943 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:20:12.118962 kernel: Fallback order for Node 0: 0
Nov 1 00:20:12.118980 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Nov 1 00:20:12.118998 kernel: Policy zone: Normal
Nov 1 00:20:12.119025 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:20:12.119042 kernel: software IO TLB: area num 2.
Nov 1 00:20:12.119061 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 346940K reserved, 0K cma-reserved)
Nov 1 00:20:12.119084 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:20:12.119101 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:20:12.119119 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:20:12.119137 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:20:12.119156 kernel: Dynamic Preempt: voluntary
Nov 1 00:20:12.119174 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:20:12.119199 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:20:12.119219 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:20:12.119255 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:20:12.119273 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:20:12.119291 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:20:12.119308 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:20:12.119331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:20:12.119349 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:20:12.119366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:20:12.119385 kernel: Console: colour dummy device 80x25
Nov 1 00:20:12.119407 kernel: printk: console [ttyS0] enabled
Nov 1 00:20:12.119426 kernel: ACPI: Core revision 20230628
Nov 1 00:20:12.119446 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:20:12.119465 kernel: x2apic enabled
Nov 1 00:20:12.119485 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:20:12.119504 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 1 00:20:12.119524 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:20:12.119544 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 1 00:20:12.119563 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 1 00:20:12.119587 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 1 00:20:12.119615 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:20:12.119635 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 00:20:12.119655 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 00:20:12.119699 kernel: Spectre V2 : Mitigation: IBRS
Nov 1 00:20:12.119716 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:20:12.119733 kernel: RETBleed: Mitigation: IBRS
Nov 1 00:20:12.119752 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:20:12.119770 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Nov 1 00:20:12.119796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:20:12.119814 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:20:12.119834 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:20:12.119854 kernel: active return thunk: its_return_thunk
Nov 1 00:20:12.119870 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:20:12.119888 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:20:12.119905 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:20:12.119923 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:20:12.119941 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:20:12.119967 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:20:12.119988 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:20:12.120015 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:20:12.120032 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:20:12.121727 kernel: landlock: Up and running.
Nov 1 00:20:12.121747 kernel: SELinux: Initializing.
Nov 1 00:20:12.121764 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:20:12.121782 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:20:12.121801 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 1 00:20:12.121828 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:12.121847 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:12.121864 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:12.121881 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 1 00:20:12.121899 kernel: signal: max sigframe size: 1776
Nov 1 00:20:12.121918 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:20:12.121937 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:20:12.121956 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:20:12.121975 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:20:12.121999 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:20:12.122017 kernel: .... node #0, CPUs: #1
Nov 1 00:20:12.122037 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 1 00:20:12.122057 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:20:12.122075 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:20:12.122094 kernel: smpboot: Max logical packages: 1
Nov 1 00:20:12.122112 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 1 00:20:12.122131 kernel: devtmpfs: initialized
Nov 1 00:20:12.122155 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:20:12.122175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 1 00:20:12.122195 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:20:12.122214 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:20:12.122234 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:20:12.122252 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:20:12.122271 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:20:12.122289 kernel: audit: type=2000 audit(1761956410.303:1): state=initialized audit_enabled=0 res=1
Nov 1 00:20:12.122308 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:20:12.122331 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:20:12.122350 kernel: cpuidle: using governor menu
Nov 1 00:20:12.122369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:20:12.122387 kernel: dca service started, version 1.12.1
Nov 1 00:20:12.122407 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:20:12.122427 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:20:12.122447 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:20:12.122465 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:20:12.122483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:20:12.122506 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:20:12.122525 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:20:12.122543 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:20:12.122563 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:20:12.122582 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 1 00:20:12.122610 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:20:12.122629 kernel: ACPI: Interpreter enabled
Nov 1 00:20:12.122649 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:20:12.122701 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:20:12.122725 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:20:12.122744 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 1 00:20:12.122763 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 1 00:20:12.122781 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:20:12.123049 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:20:12.123253 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 1 00:20:12.123451 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 1 00:20:12.123481 kernel: PCI host bridge to bus 0000:00
Nov 1 00:20:12.125725 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:20:12.125935 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:20:12.126127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:20:12.126301 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 1 00:20:12.126467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:20:12.126723 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:20:12.126933 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 1 00:20:12.127135 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:20:12.127317 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 1 00:20:12.127506 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 1 00:20:12.129821 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 1 00:20:12.130077 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 1 00:20:12.130297 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:20:12.130495 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 1 00:20:12.130944 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 1 00:20:12.131187 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:20:12.131405 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 1 00:20:12.131607 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 1 00:20:12.131634 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:20:12.131662 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:20:12.131936 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:20:12.131958 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:20:12.131978 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:20:12.131998 kernel: iommu: Default domain type: Translated
Nov 1 00:20:12.132018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:20:12.132038 kernel: efivars: Registered efivars operations
Nov 1 00:20:12.132059 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:20:12.132080 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:20:12.132099 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 1 00:20:12.132125 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 1 00:20:12.132145 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 1 00:20:12.132165 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 1 00:20:12.132184 kernel: vgaarb: loaded
Nov 1 00:20:12.132204 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:20:12.132222 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:20:12.132242 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:20:12.132263 kernel: pnp: PnP ACPI init
Nov 1 00:20:12.132283 kernel: pnp: PnP ACPI: found 7 devices
Nov 1 00:20:12.132308 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:20:12.132328 kernel: NET: Registered PF_INET protocol family
Nov 1 00:20:12.132348 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:20:12.132368 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:20:12.132389 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:20:12.132409 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:20:12.132428 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 1 00:20:12.132449 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:20:12.132473 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:20:12.132493 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:20:12.132513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:20:12.132533 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:20:12.132753 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:20:12.132949 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:20:12.133155 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:20:12.133333 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 1 00:20:12.133542 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:20:12.133568 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:20:12.133587 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:20:12.133607 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 1 00:20:12.133626 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:20:12.133646 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:20:12.133689 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:20:12.133710 kernel: Initialise system trusted keyrings
Nov 1 00:20:12.133736 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:20:12.133755 kernel: Key type asymmetric registered
Nov 1 00:20:12.133774 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:20:12.133793 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:20:12.133812 kernel: io scheduler mq-deadline registered
Nov 1 00:20:12.133832 kernel: io scheduler kyber registered
Nov 1 00:20:12.133852 kernel: io scheduler bfq registered
Nov 1 00:20:12.133871 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:20:12.133891 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:20:12.134105 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 1 00:20:12.134130 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 1 00:20:12.134318 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 1 00:20:12.134343 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:20:12.134529 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 1 00:20:12.134553 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:20:12.134572 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:20:12.134592 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:20:12.134612 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 1 00:20:12.134636 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 1 00:20:12.134848 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 1 00:20:12.134875 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:20:12.134895 kernel: i8042: Warning: Keylock active
Nov 1 00:20:12.134914 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:20:12.134940 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:20:12.135128 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 1 00:20:12.135312 kernel: rtc_cmos 00:00: registered as rtc0
Nov 1 00:20:12.135502 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:20:11 UTC (1761956411)
Nov 1 00:20:12.135717 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 1 00:20:12.135742 kernel: intel_pstate: CPU model not supported
Nov 1 00:20:12.135762 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:20:12.135781 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:20:12.135800 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:20:12.135819 kernel: Segment Routing with IPv6
Nov 1 00:20:12.135838 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:20:12.135863 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:20:12.135883 kernel: Key type dns_resolver registered Nov 1 00:20:12.135902 kernel: IPI shorthand broadcast: enabled Nov 1 00:20:12.135921 kernel: sched_clock: Marking stable (909005933, 178969774)->(1179696519, -91720812) Nov 1 00:20:12.135948 kernel: registered taskstats version 1 Nov 1 00:20:12.135967 kernel: Loading compiled-in X.509 certificates Nov 1 00:20:12.135986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:20:12.136006 kernel: Key type .fscrypt registered Nov 1 00:20:12.136024 kernel: Key type fscrypt-provisioning registered Nov 1 00:20:12.136047 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:20:12.136066 kernel: ima: No architecture policies found Nov 1 00:20:12.136086 kernel: clk: Disabling unused clocks Nov 1 00:20:12.136105 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:20:12.136124 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:20:12.136144 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 1 00:20:12.136164 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:20:12.136183 kernel: Run /init as init process Nov 1 00:20:12.136203 kernel: with arguments: Nov 1 00:20:12.136226 kernel: /init Nov 1 00:20:12.136245 kernel: with environment: Nov 1 00:20:12.136264 kernel: HOME=/ Nov 1 00:20:12.136283 kernel: TERM=linux Nov 1 00:20:12.136306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:20:12.136329 systemd[1]: 
Detected virtualization google. Nov 1 00:20:12.136349 systemd[1]: Detected architecture x86-64. Nov 1 00:20:12.136373 systemd[1]: Running in initrd. Nov 1 00:20:12.136392 systemd[1]: No hostname configured, using default hostname. Nov 1 00:20:12.136412 systemd[1]: Hostname set to . Nov 1 00:20:12.136433 systemd[1]: Initializing machine ID from random generator. Nov 1 00:20:12.136453 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:20:12.136472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:12.136493 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:12.136514 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:20:12.136538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:20:12.136559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:20:12.136579 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:20:12.136602 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:20:12.136623 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:20:12.136643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:12.136674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:12.136700 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:12.136720 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:20:12.136760 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:20:12.136785 systemd[1]: Reached target timers.target - Timer Units. 
Nov 1 00:20:12.136806 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:20:12.136827 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:20:12.136852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:20:12.136873 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:20:12.136894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:12.136915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:12.136943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:12.136964 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:12.136985 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:20:12.137006 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:20:12.137027 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:20:12.137051 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:20:12.137072 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:20:12.137094 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:20:12.137115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:12.137178 systemd-journald[183]: Collecting audit messages is disabled. Nov 1 00:20:12.137229 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:20:12.137250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:12.137271 systemd-journald[183]: Journal started Nov 1 00:20:12.137313 systemd-journald[183]: Runtime Journal (/run/log/journal/5fbd200d931d42428bb1d9cb1c41bc41) is 8.0M, max 148.7M, 140.7M free. Nov 1 00:20:12.140423 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 1 00:20:12.140425 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:20:12.150148 systemd-modules-load[184]: Inserted module 'overlay' Nov 1 00:20:12.155910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:20:12.168953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:12.171276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:12.183309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:20:12.197755 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:20:12.201013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:12.202174 kernel: Bridge firewalling registered Nov 1 00:20:12.201480 systemd-modules-load[184]: Inserted module 'br_netfilter' Nov 1 00:20:12.204891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:20:12.205357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:20:12.210162 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:20:12.227784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:12.241549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:12.249187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:12.254088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:20:12.264007 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:20:12.268203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 1 00:20:12.286055 dracut-cmdline[216]: dracut-dracut-053 Nov 1 00:20:12.290504 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:12.340158 systemd-resolved[217]: Positive Trust Anchors: Nov 1 00:20:12.340179 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:12.340250 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:12.347412 systemd-resolved[217]: Defaulting to hostname 'linux'. Nov 1 00:20:12.351537 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:12.367558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:12.403716 kernel: SCSI subsystem initialized Nov 1 00:20:12.416717 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:20:12.429802 kernel: iscsi: registered transport (tcp) Nov 1 00:20:12.455029 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:20:12.455119 kernel: QLogic iSCSI HBA Driver Nov 1 00:20:12.509351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 1 00:20:12.517960 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:20:12.560926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:20:12.561012 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:20:12.561041 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:20:12.607737 kernel: raid6: avx2x4 gen() 18024 MB/s Nov 1 00:20:12.624758 kernel: raid6: avx2x2 gen() 17978 MB/s Nov 1 00:20:12.642777 kernel: raid6: avx2x1 gen() 13905 MB/s Nov 1 00:20:12.642878 kernel: raid6: using algorithm avx2x4 gen() 18024 MB/s Nov 1 00:20:12.661514 kernel: raid6: .... xor() 6554 MB/s, rmw enabled Nov 1 00:20:12.661571 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:20:12.685715 kernel: xor: automatically using best checksumming function avx Nov 1 00:20:12.868708 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:20:12.883404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:20:12.893946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:12.935780 systemd-udevd[400]: Using default interface naming scheme 'v255'. Nov 1 00:20:12.942804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:12.973930 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:20:13.015171 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Nov 1 00:20:13.054064 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:20:13.080971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:20:13.180556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:13.191051 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 1 00:20:13.253535 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:13.274638 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:20:13.288277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:13.320346 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:20:13.351011 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:13.403219 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:20:13.403029 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:20:13.422864 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Nov 1 00:20:13.440053 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:20:13.440147 kernel: AES CTR mode by8 optimization enabled Nov 1 00:20:13.450999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:13.451205 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:13.496953 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Nov 1 00:20:13.497305 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Nov 1 00:20:13.497621 kernel: sd 0:0:1:0: [sda] Write Protect is off Nov 1 00:20:13.498985 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Nov 1 00:20:13.499216 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:20:13.530029 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:20:13.530109 kernel: GPT:17805311 != 33554431 Nov 1 00:20:13.530134 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:20:13.530158 kernel: GPT:17805311 != 33554431 Nov 1 00:20:13.530181 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 00:20:13.530206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.544262 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:13.561868 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Nov 1 00:20:13.573824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:13.574233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:13.604848 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:13.646881 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (461) Nov 1 00:20:13.646926 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (452) Nov 1 00:20:13.643080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:13.657573 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:13.697732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:13.736036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Nov 1 00:20:13.748168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Nov 1 00:20:13.776130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 1 00:20:13.787086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Nov 1 00:20:13.802089 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Nov 1 00:20:13.831188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:20:13.865895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:13.895071 disk-uuid[539]: Primary Header is updated. 
Nov 1 00:20:13.895071 disk-uuid[539]: Secondary Entries is updated. Nov 1 00:20:13.895071 disk-uuid[539]: Secondary Header is updated. Nov 1 00:20:13.915095 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.928703 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.946694 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:13.956636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:14.944429 disk-uuid[541]: The operation has completed successfully. Nov 1 00:20:14.952866 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:15.027936 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:20:15.028104 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:20:15.070971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:20:15.090967 sh[566]: Success Nov 1 00:20:15.104879 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:20:15.198137 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:20:15.205575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:20:15.229301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:20:15.285385 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:20:15.285476 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:15.285519 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:20:15.294858 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:20:15.301694 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:20:15.344729 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:20:15.351696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Nov 1 00:20:15.352709 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:20:15.357934 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:20:15.380609 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:20:15.470126 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:15.470179 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:15.470205 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:15.470229 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:15.470254 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:15.470288 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:15.461056 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:20:15.479359 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:20:15.497948 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:20:15.585228 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:15.593010 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:20:15.704362 systemd-networkd[749]: lo: Link UP Nov 1 00:20:15.704834 systemd-networkd[749]: lo: Gained carrier Nov 1 00:20:15.705926 ignition[673]: Ignition 2.19.0 Nov 1 00:20:15.707280 systemd-networkd[749]: Enumeration completed Nov 1 00:20:15.705939 ignition[673]: Stage: fetch-offline Nov 1 00:20:15.707433 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 1 00:20:15.706019 ignition[673]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.708406 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:15.706038 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.708413 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:15.706420 ignition[673]: parsed url from cmdline: "" Nov 1 00:20:15.710887 systemd-networkd[749]: eth0: Link UP Nov 1 00:20:15.706428 ignition[673]: no config URL provided Nov 1 00:20:15.710893 systemd-networkd[749]: eth0: Gained carrier Nov 1 00:20:15.706440 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:15.710904 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:15.706464 ignition[673]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:15.727244 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:20:15.706479 ignition[673]: failed to fetch config: resource requires networking Nov 1 00:20:15.730788 systemd-networkd[749]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:20:15.706943 ignition[673]: Ignition finished successfully Nov 1 00:20:15.730804 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.8/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 1 00:20:15.800314 ignition[757]: Ignition 2.19.0 Nov 1 00:20:15.742744 systemd[1]: Reached target network.target - Network. Nov 1 00:20:15.800323 ignition[757]: Stage: fetch Nov 1 00:20:15.764917 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 1 00:20:15.800541 ignition[757]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.810882 unknown[757]: fetched base config from "system" Nov 1 00:20:15.800556 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.810906 unknown[757]: fetched base config from "system" Nov 1 00:20:15.800726 ignition[757]: parsed url from cmdline: "" Nov 1 00:20:15.810919 unknown[757]: fetched user config from "gcp" Nov 1 00:20:15.800734 ignition[757]: no config URL provided Nov 1 00:20:15.834323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:20:15.800744 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:15.857934 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:20:15.800759 ignition[757]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:15.882636 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:20:15.800789 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Nov 1 00:20:15.890905 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:20:15.805305 ignition[757]: GET result: OK Nov 1 00:20:15.948246 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:20:15.805416 ignition[757]: parsing config with SHA512: ec853cf2b35ff770574752c5fd26101b8041d1024ed16d63ba5e3d0a02347038233870b6e6f5583af4b6a6b2898fc6b3a77d71d7352c37e66950c3abff102e43 Nov 1 00:20:15.984284 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:15.811836 ignition[757]: fetch: fetch complete Nov 1 00:20:16.001847 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:20:15.811855 ignition[757]: fetch: fetch passed Nov 1 00:20:16.024902 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 1 00:20:15.811928 ignition[757]: Ignition finished successfully Nov 1 00:20:16.038905 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:15.879981 ignition[764]: Ignition 2.19.0 Nov 1 00:20:16.054875 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:15.879990 ignition[764]: Stage: kargs Nov 1 00:20:16.074913 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:20:15.880183 ignition[764]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.880195 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.881348 ignition[764]: kargs: kargs passed Nov 1 00:20:15.881409 ignition[764]: Ignition finished successfully Nov 1 00:20:15.935524 ignition[769]: Ignition 2.19.0 Nov 1 00:20:15.935534 ignition[769]: Stage: disks Nov 1 00:20:15.935784 ignition[769]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:15.935797 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:15.937181 ignition[769]: disks: disks passed Nov 1 00:20:15.937250 ignition[769]: Ignition finished successfully Nov 1 00:20:16.121365 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 1 00:20:16.330561 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:20:16.363877 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:20:16.488992 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:20:16.489874 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:20:16.490775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:20:16.522853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:16.527634 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 1 00:20:16.557439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:20:16.557542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:20:16.645022 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (786) Nov 1 00:20:16.645071 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:16.645088 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:16.645118 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:16.645132 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:16.645148 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:16.557585 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:16.629047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:20:16.654064 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:20:16.679914 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:20:16.806033 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:20:16.817068 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:20:16.826921 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:20:16.836872 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:20:16.882858 systemd-networkd[749]: eth0: Gained IPv6LL Nov 1 00:20:16.982518 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:20:17.012878 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 1 00:20:17.040871 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:17.038877 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:20:17.059345 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:20:17.083617 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:20:17.098580 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:20:17.116859 ignition[899]: INFO : Ignition 2.19.0 Nov 1 00:20:17.116859 ignition[899]: INFO : Stage: mount Nov 1 00:20:17.116859 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:17.116859 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:20:17.116859 ignition[899]: INFO : mount: mount passed Nov 1 00:20:17.116859 ignition[899]: INFO : Ignition finished successfully Nov 1 00:20:17.115818 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:20:17.127045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:17.206705 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (910) Nov 1 00:20:17.224618 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:17.224715 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:17.224741 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:17.248096 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:17.248198 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:17.251827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 00:20:17.290164 ignition[927]: INFO : Ignition 2.19.0
Nov 1 00:20:17.290164 ignition[927]: INFO : Stage: files
Nov 1 00:20:17.305845 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:20:17.305845 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:20:17.305845 ignition[927]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:20:17.305845 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:20:17.305845 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:20:17.305845 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:20:17.305845 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:20:17.303421 unknown[927]: wrote ssh authorized keys file for user: core
Nov 1 00:20:17.503210 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:20:17.803493 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:20:17.820843 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 00:20:18.351607 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:20:19.150351 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:20:19.150351 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:20:19.169037 ignition[927]: INFO : files: files passed
Nov 1 00:20:19.169037 ignition[927]: INFO : Ignition finished successfully
Nov 1 00:20:19.156547 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:20:19.208899 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:20:19.244905 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:20:19.254480 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:20:19.388960 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:20:19.254602 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:20:19.423881 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:20:19.423881 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:20:19.339223 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:20:19.360842 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:20:19.385917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:20:19.468377 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:20:19.468507 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:20:19.483729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:20:19.503988 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:20:19.512207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:20:19.519054 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:20:19.606754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:20:19.625026 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:20:19.661479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:20:19.661805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:20:19.693162 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:20:19.712039 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:20:19.712262 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:20:19.740103 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:20:19.761037 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:20:19.779168 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:20:19.797068 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:20:19.818174 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:20:19.840059 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:20:19.860139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:20:19.881081 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:20:19.902137 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:20:19.922136 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:20:19.941067 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:20:19.941292 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:20:19.967149 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:20:19.987088 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:20:20.008031 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:20:20.008239 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:20:20.029986 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:20:20.030213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:20:20.062179 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:20:20.062418 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:20:20.082178 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:20:20.082385 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:20:20.107976 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:20:20.159875 ignition[980]: INFO : Ignition 2.19.0
Nov 1 00:20:20.159875 ignition[980]: INFO : Stage: umount
Nov 1 00:20:20.159875 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:20:20.159875 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:20:20.159875 ignition[980]: INFO : umount: umount passed
Nov 1 00:20:20.159875 ignition[980]: INFO : Ignition finished successfully
Nov 1 00:20:20.120823 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:20:20.121126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:20:20.178290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:20:20.207028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:20:20.207267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:20:20.238216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:20:20.238406 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:20:20.274295 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:20:20.275443 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:20:20.275562 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:20:20.280550 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:20:20.280677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:20:20.299313 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:20:20.299443 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:20:20.316312 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:20:20.316376 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:20:20.342066 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:20:20.342148 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:20:20.352193 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:20:20.352263 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 00:20:20.368150 systemd[1]: Stopped target network.target - Network.
Nov 1 00:20:20.394027 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:20:20.394135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:20:20.412954 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:20:20.429848 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:20:20.429934 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:20:20.430035 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:20:20.456944 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:20:20.472932 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:20:20.473022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:20:20.493924 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:20:20.494027 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:20:20.511924 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:20:20.512048 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:20:20.531949 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:20:20.532075 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:20:20.549983 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:20:20.550097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:20:20.568201 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:20:20.570759 systemd-networkd[749]: eth0: DHCPv6 lease lost
Nov 1 00:20:20.587084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:20:20.606357 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:20:20.606509 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:20:20.625378 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:20:20.625755 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:20:20.645710 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:20:20.645812 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:20:20.670834 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:20:20.674020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:20:20.674103 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:20:20.722923 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:20:20.723019 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:20:20.740930 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:20:20.741059 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:20:20.759933 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:20:20.760051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:20:20.780114 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:20:20.793219 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:20:20.793450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:20:20.828394 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:20:21.212811 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:20:20.828518 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:20:20.847425 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:20:20.847505 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:20:20.866179 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:20:20.866255 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:20:20.887024 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:20:20.887133 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:20:20.917212 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:20:20.917299 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:20:20.960903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:20:20.961152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:20:21.013958 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:20:21.036831 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:20:21.036954 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:20:21.056984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:20:21.057091 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:20:21.079431 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:20:21.079554 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:20:21.099197 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:20:21.123906 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:20:21.161432 systemd[1]: Switching root.
Nov 1 00:20:21.421870 systemd-journald[183]: Journal stopped
Nov 1 00:20:24.054454 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:20:24.054525 kernel: SELinux: policy capability open_perms=1
Nov 1 00:20:24.054550 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:20:24.054570 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:20:24.054587 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:20:24.054607 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:20:24.054629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:20:24.054654 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:20:24.054694 kernel: audit: type=1403 audit(1761956421.839:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:20:24.054718 systemd[1]: Successfully loaded SELinux policy in 87.613ms.
Nov 1 00:20:24.054741 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.529ms.
Nov 1 00:20:24.054764 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:20:24.054786 systemd[1]: Detected virtualization google.
Nov 1 00:20:24.054806 systemd[1]: Detected architecture x86-64.
Nov 1 00:20:24.054835 systemd[1]: Detected first boot.
Nov 1 00:20:24.054858 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:20:24.054879 zram_generator::config[1021]: No configuration found.
Nov 1 00:20:24.054903 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:20:24.054925 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:20:24.054951 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 00:20:24.054974 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:20:24.054997 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 00:20:24.055019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 00:20:24.055042 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 00:20:24.055064 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 00:20:24.055087 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 00:20:24.055115 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 00:20:24.055138 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 00:20:24.055159 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 00:20:24.055182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:20:24.055205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:20:24.055227 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 00:20:24.055250 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 00:20:24.055273 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 00:20:24.055300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:20:24.055324 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 00:20:24.055346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:20:24.055368 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 00:20:24.055527 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 00:20:24.055551 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:20:24.055582 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 00:20:24.055608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:20:24.055631 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:20:24.055660 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:20:24.055711 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:20:24.055734 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 00:20:24.055758 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 00:20:24.055781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:20:24.055804 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:20:24.055828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:20:24.055859 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 00:20:24.055882 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 00:20:24.055907 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 00:20:24.055930 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 00:20:24.055955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:24.055983 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 00:20:24.056007 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 00:20:24.056031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 00:20:24.056056 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:20:24.056079 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:20:24.056103 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 00:20:24.056128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:20:24.056153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:20:24.056182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 00:20:24.056206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:20:24.056230 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:20:24.056254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:20:24.056278 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 00:20:24.056302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:20:24.056326 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:20:24.056351 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:20:24.056380 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 00:20:24.056402 kernel: ACPI: bus type drm_connector registered
Nov 1 00:20:24.056422 kernel: fuse: init (API version 7.39)
Nov 1 00:20:24.056443 kernel: loop: module loaded
Nov 1 00:20:24.056462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:20:24.056497 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:20:24.056520 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:20:24.056542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:20:24.056564 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:20:24.056592 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:20:24.056650 systemd-journald[1108]: Collecting audit messages is disabled.
Nov 1 00:20:24.056712 systemd-journald[1108]: Journal started
Nov 1 00:20:24.056919 systemd-journald[1108]: Runtime Journal (/run/log/journal/ca6732d46a29498681b410cfaf2429ca) is 8.0M, max 148.7M, 140.7M free.
Nov 1 00:20:22.783370 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:20:22.805510 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 1 00:20:22.806155 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:20:24.076722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:20:24.099389 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:20:24.099496 systemd[1]: Stopped verity-setup.service.
Nov 1 00:20:24.135570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:24.135724 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:20:24.147309 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:20:24.157074 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:20:24.167098 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:20:24.177088 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:20:24.187052 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:20:24.197057 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:20:24.207273 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:20:24.219217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:20:24.231254 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:20:24.231565 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:20:24.243275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:20:24.243591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:20:24.255252 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:20:24.255525 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:20:24.267225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:20:24.267474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:20:24.279240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:20:24.279534 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:20:24.290251 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:20:24.290505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:20:24.301269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:20:24.311236 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:20:24.323255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:20:24.335264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:20:24.360764 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:20:24.379872 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:20:24.391292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:20:24.400867 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:20:24.400937 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:20:24.412789 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:20:24.435969 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:20:24.452912 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:20:24.463043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:20:24.472331 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:20:24.485595 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:20:24.496875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:20:24.506710 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:20:24.516887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:20:24.525999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:20:24.542294 systemd-journald[1108]: Time spent on flushing to /var/log/journal/ca6732d46a29498681b410cfaf2429ca is 101.577ms for 927 entries.
Nov 1 00:20:24.542294 systemd-journald[1108]: System Journal (/var/log/journal/ca6732d46a29498681b410cfaf2429ca) is 8.0M, max 584.8M, 576.8M free.
Nov 1 00:20:24.738347 systemd-journald[1108]: Received client request to flush runtime journal.
Nov 1 00:20:24.738440 kernel: loop0: detected capacity change from 0 to 142488
Nov 1 00:20:24.542382 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:20:24.570920 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:20:24.586907 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:20:24.604080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:20:24.615542 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:20:24.627261 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:20:24.639454 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:20:24.657290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:20:24.680236 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:20:24.702958 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:20:24.724134 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:20:24.741213 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:20:24.782551 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:20:24.794515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:20:24.800164 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:20:24.827491 kernel: loop1: detected capacity change from 0 to 219144
Nov 1 00:20:24.830427 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:20:24.851969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:20:24.930539 kernel: loop2: detected capacity change from 0 to 140768
Nov 1 00:20:24.945304 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Nov 1 00:20:24.945747 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Nov 1 00:20:24.960017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:20:25.035782 kernel: loop3: detected capacity change from 0 to 54824
Nov 1 00:20:25.123148 kernel: loop4: detected capacity change from 0 to 142488
Nov 1 00:20:25.179733 kernel: loop5: detected capacity change from 0 to 219144
Nov 1 00:20:25.243724 kernel: loop6: detected capacity change from 0 to 140768
Nov 1 00:20:25.308709 kernel: loop7: detected capacity change from 0 to 54824
Nov 1 00:20:25.335212 (sd-merge)[1163]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Nov 1 00:20:25.336224 (sd-merge)[1163]: Merged extensions into '/usr'.
Nov 1 00:20:25.351244 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:20:25.351270 systemd[1]: Reloading...
Nov 1 00:20:25.494777 zram_generator::config[1186]: No configuration found.
Nov 1 00:20:25.706524 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:20:25.792307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:20:25.889285 systemd[1]: Reloading finished in 536 ms. Nov 1 00:20:25.918818 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:20:25.929404 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:20:25.941275 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:20:25.961945 systemd[1]: Starting ensure-sysext.service... Nov 1 00:20:25.979931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:25.998994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:26.015862 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:20:26.015886 systemd[1]: Reloading... Nov 1 00:20:26.025275 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:20:26.026014 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:20:26.027934 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:20:26.028520 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 1 00:20:26.028654 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 1 00:20:26.036588 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 1 00:20:26.038719 systemd-tmpfiles[1232]: Skipping /boot Nov 1 00:20:26.073741 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Nov 1 00:20:26.077026 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:20:26.077048 systemd-tmpfiles[1232]: Skipping /boot Nov 1 00:20:26.160857 zram_generator::config[1256]: No configuration found. Nov 1 00:20:26.485227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:20:26.487717 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 1 00:20:26.500692 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 00:20:26.522745 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:20:26.541740 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1261) Nov 1 00:20:26.552725 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:20:26.577708 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 1 00:20:26.604695 kernel: ACPI: button: Sleep Button [SLPF] Nov 1 00:20:26.639015 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:20:26.640101 systemd[1]: Reloading finished in 623 ms. Nov 1 00:20:26.667340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:26.688302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:26.769717 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:20:26.774248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 1 00:20:26.776793 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:20:26.787039 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:20:26.805653 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:20:26.817075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:20:26.825118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:20:26.840641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:20:26.861048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:20:26.871017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:20:26.877806 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:20:26.890726 augenrules[1353]: No rules Nov 1 00:20:26.893301 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:20:26.913650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:20:26.931737 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:20:26.943861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:26.953216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:20:26.963953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:26.964345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:20:26.976534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 1 00:20:26.977011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:20:26.989504 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:26.989768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:20:27.000616 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:20:27.012575 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:20:27.050608 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 1 00:20:27.067656 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:20:27.079637 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:20:27.093983 systemd[1]: Finished ensure-sysext.service. Nov 1 00:20:27.106910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:27.107199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:20:27.111900 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:20:27.133886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:20:27.154918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:20:27.157699 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:20:27.171898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:20:27.199508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:20:27.221895 systemd[1]: Starting setup-oem.service - Setup OEM... 
Nov 1 00:20:27.230927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:20:27.243008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:20:27.255889 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:20:27.264903 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:20:27.292035 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:20:27.311041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:27.321804 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:20:27.321858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:20:27.324556 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:20:27.336366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:27.336630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:20:27.337263 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:20:27.337521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:20:27.338013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:27.338225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:20:27.338734 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:27.338955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 1 00:20:27.348175 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:20:27.348978 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:20:27.351560 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:20:27.369116 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 1 00:20:27.378256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:27.391525 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:20:27.395057 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Nov 1 00:20:27.395175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:20:27.395287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:20:27.423316 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:20:27.474164 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:20:27.501340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:27.525023 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Nov 1 00:20:27.531952 systemd-networkd[1357]: lo: Link UP Nov 1 00:20:27.532523 systemd-networkd[1357]: lo: Gained carrier Nov 1 00:20:27.536045 systemd-networkd[1357]: Enumeration completed Nov 1 00:20:27.536275 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:20:27.537111 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:27.537239 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 00:20:27.538214 systemd-networkd[1357]: eth0: Link UP Nov 1 00:20:27.538329 systemd-networkd[1357]: eth0: Gained carrier Nov 1 00:20:27.538357 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:27.548454 systemd-resolved[1359]: Positive Trust Anchors: Nov 1 00:20:27.548487 systemd-resolved[1359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:27.548553 systemd-resolved[1359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:27.549488 systemd-networkd[1357]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:20:27.549513 systemd-networkd[1357]: eth0: DHCPv4 address 10.128.0.8/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 1 00:20:27.555921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:20:27.556700 systemd-resolved[1359]: Defaulting to hostname 'linux'. Nov 1 00:20:27.566972 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:27.577969 systemd[1]: Reached target network.target - Network. Nov 1 00:20:27.586891 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 1 00:20:27.597852 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:27.607998 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:20:27.618949 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:20:27.630127 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:20:27.640023 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:20:27.650889 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:20:27.661861 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:20:27.661923 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:27.670850 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:20:27.680454 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:20:27.692676 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:20:27.710647 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:20:27.721943 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:20:27.732080 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:27.741861 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:27.750897 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:20:27.750945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:20:27.760908 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:20:27.772597 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Nov 1 00:20:27.793003 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:20:27.814862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:20:27.841936 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:20:27.851821 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:20:27.856167 jq[1423]: false Nov 1 00:20:27.861925 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:20:27.884757 systemd[1]: Started ntpd.service - Network Time Service. Nov 1 00:20:27.903838 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:20:27.910043 coreos-metadata[1421]: Nov 01 00:20:27.909 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetch successful Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetch successful Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetch successful Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Nov 1 00:20:27.910634 coreos-metadata[1421]: Nov 01 00:20:27.910 INFO Fetch successful Nov 1 00:20:27.925005 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 1 00:20:27.928422 extend-filesystems[1424]: Found loop4 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found loop5 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found loop6 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found loop7 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda1 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda2 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda3 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found usr Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda4 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda6 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda7 Nov 1 00:20:27.928422 extend-filesystems[1424]: Found sda9 Nov 1 00:20:27.928422 extend-filesystems[1424]: Checking size of /dev/sda9 Nov 1 00:20:28.183432 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Nov 1 00:20:28.183497 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1282) Nov 1 00:20:28.183535 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Nov 1 00:20:27.940500 dbus-daemon[1422]: [system] SELinux support is enabled Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: ntpd 4.2.8p17@1.4004-o Fri Oct 31 22:05:56 UTC 2025 (1): Starting Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: ---------------------------------------------------- Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: ntp-4 is maintained by Network Time Foundation, Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: corporation. 
Support and training for ntp-4 are Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: available at https://www.nwtime.org/support Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: ---------------------------------------------------- Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: proto: precision = 0.088 usec (-23) Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: basedate set to 2025-10-19 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: gps base set to 2025-10-19 (week 2389) Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Listen and drop on 0 v6wildcard [::]:123 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Listen normally on 2 lo 127.0.0.1:123 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Listen normally on 3 eth0 10.128.0.8:123 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Listen normally on 4 lo [::1]:123 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: bind(21) AF_INET6 fe80::4001:aff:fe80:8%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:8%2#123 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:8%2 Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: Listening on routing socket on fd #21 for interface updates Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:20:28.184080 ntpd[1429]: 1 Nov 00:20:27 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:20:27.943926 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 1 00:20:28.188762 extend-filesystems[1424]: Resized partition /dev/sda9 Nov 1 00:20:27.948150 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1357 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:20:27.959846 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:20:28.196363 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:20:27.967001 ntpd[1429]: ntpd 4.2.8p17@1.4004-o Fri Oct 31 22:05:56 UTC 2025 (1): Starting Nov 1 00:20:28.007546 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 1 00:20:27.967033 ntpd[1429]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 1 00:20:28.009153 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:20:27.967048 ntpd[1429]: ---------------------------------------------------- Nov 1 00:20:28.021063 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:20:27.967062 ntpd[1429]: ntp-4 is maintained by Network Time Foundation, Nov 1 00:20:28.035147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:20:27.967077 ntpd[1429]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 1 00:20:28.057845 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:20:28.212926 jq[1450]: true Nov 1 00:20:27.967091 ntpd[1429]: corporation. Support and training for ntp-4 are Nov 1 00:20:28.088325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:20:27.967106 ntpd[1429]: available at https://www.nwtime.org/support Nov 1 00:20:28.089780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 1 00:20:28.217108 extend-filesystems[1445]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 00:20:28.217108 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 1 00:20:28.217108 extend-filesystems[1445]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Nov 1 00:20:27.967120 ntpd[1429]: ---------------------------------------------------- Nov 1 00:20:28.090345 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:20:28.263507 update_engine[1449]: I20251101 00:20:28.227314 1449 main.cc:92] Flatcar Update Engine starting Nov 1 00:20:28.263507 update_engine[1449]: I20251101 00:20:28.237053 1449 update_check_scheduler.cc:74] Next update check in 3m29s Nov 1 00:20:28.263914 extend-filesystems[1424]: Resized filesystem in /dev/sda9 Nov 1 00:20:27.972713 ntpd[1429]: proto: precision = 0.088 usec (-23) Nov 1 00:20:28.091910 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:20:27.976051 ntpd[1429]: basedate set to 2025-10-19 Nov 1 00:20:28.114306 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:20:27.976080 ntpd[1429]: gps base set to 2025-10-19 (week 2389) Nov 1 00:20:28.115047 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:20:27.987765 ntpd[1429]: Listen and drop on 0 v6wildcard [::]:123 Nov 1 00:20:28.189611 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:20:27.987834 ntpd[1429]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 1 00:20:28.221502 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:20:27.988166 ntpd[1429]: Listen normally on 2 lo 127.0.0.1:123 Nov 1 00:20:28.222753 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 1 00:20:27.988251 ntpd[1429]: Listen normally on 3 eth0 10.128.0.8:123 Nov 1 00:20:27.988325 ntpd[1429]: Listen normally on 4 lo [::1]:123 Nov 1 00:20:27.988404 ntpd[1429]: bind(21) AF_INET6 fe80::4001:aff:fe80:8%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:20:27.988439 ntpd[1429]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:8%2#123 Nov 1 00:20:27.988460 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:8%2 Nov 1 00:20:27.988508 ntpd[1429]: Listening on routing socket on fd #21 for interface updates Nov 1 00:20:27.991313 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:20:27.991355 ntpd[1429]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:20:28.203150 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:20:28.296715 jq[1459]: true Nov 1 00:20:28.311193 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:20:28.321999 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:20:28.337443 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:20:28.348418 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:20:28.348575 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:20:28.348617 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:20:28.354952 tar[1457]: linux-amd64/LICENSE Nov 1 00:20:28.356309 tar[1457]: linux-amd64/helm Nov 1 00:20:28.368564 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Nov 1 00:20:28.378854 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:20:28.378910 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:20:28.398956 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:20:28.433069 systemd-logind[1442]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 00:20:28.433108 systemd-logind[1442]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 1 00:20:28.433140 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:20:28.441817 systemd-logind[1442]: New seat seat0. Nov 1 00:20:28.445509 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:20:28.520099 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:20:28.521002 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:20:28.548784 systemd[1]: Starting sshkeys.service... Nov 1 00:20:28.619917 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:20:28.645515 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 00:20:28.801164 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:20:28.801401 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 1 00:20:28.803622 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1477 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:20:28.827196 systemd[1]: Starting polkit.service - Authorization Manager... 
Nov 1 00:20:28.863050 coreos-metadata[1499]: Nov 01 00:20:28.862 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 1 00:20:28.869839 coreos-metadata[1499]: Nov 01 00:20:28.869 INFO Fetch failed with 404: resource not found Nov 1 00:20:28.870480 coreos-metadata[1499]: Nov 01 00:20:28.870 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 1 00:20:28.872576 coreos-metadata[1499]: Nov 01 00:20:28.872 INFO Fetch successful Nov 1 00:20:28.872787 coreos-metadata[1499]: Nov 01 00:20:28.872 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 1 00:20:28.879796 coreos-metadata[1499]: Nov 01 00:20:28.878 INFO Fetch failed with 404: resource not found Nov 1 00:20:28.879796 coreos-metadata[1499]: Nov 01 00:20:28.878 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 1 00:20:28.882417 coreos-metadata[1499]: Nov 01 00:20:28.882 INFO Fetch failed with 404: resource not found Nov 1 00:20:28.882417 coreos-metadata[1499]: Nov 01 00:20:28.882 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 1 00:20:28.884927 coreos-metadata[1499]: Nov 01 00:20:28.884 INFO Fetch successful Nov 1 00:20:28.895508 unknown[1499]: wrote ssh authorized keys file for user: core Nov 1 00:20:28.933309 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:20:28.960410 update-ssh-keys[1508]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:20:28.960263 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Nov 1 00:20:28.967612 ntpd[1429]: bind(24) AF_INET6 fe80::4001:aff:fe80:8%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:20:28.969261 ntpd[1429]: 1 Nov 00:20:28 ntpd[1429]: bind(24) AF_INET6 fe80::4001:aff:fe80:8%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:20:28.969261 ntpd[1429]: 1 Nov 00:20:28 ntpd[1429]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:8%2#123 Nov 1 00:20:28.969261 ntpd[1429]: 1 Nov 00:20:28 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:8%2 Nov 1 00:20:28.967659 ntpd[1429]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:8%2#123 Nov 1 00:20:28.967711 ntpd[1429]: failed to init interface for address fe80::4001:aff:fe80:8%2 Nov 1 00:20:28.982951 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:20:28.984344 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:20:28.989913 polkitd[1502]: Started polkitd version 121 Nov 1 00:20:28.994895 systemd[1]: Finished sshkeys.service. Nov 1 00:20:29.011337 polkitd[1502]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:20:29.011450 polkitd[1502]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:20:29.012441 polkitd[1502]: Finished loading, compiling and executing 2 rules Nov 1 00:20:29.016576 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:20:29.017709 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:20:29.018082 polkitd[1502]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:20:29.034164 systemd[1]: Started sshd@0-10.128.0.8:22-147.75.109.163:33228.service - OpenSSH per-connection server daemon (147.75.109.163:33228). Nov 1 00:20:29.048176 systemd[1]: Started polkit.service - Authorization Manager. 
Nov 1 00:20:29.064488 systemd-hostnamed[1477]: Hostname set to (transient)
Nov 1 00:20:29.068007 systemd-resolved[1359]: System hostname changed to 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9'.
Nov 1 00:20:29.072714 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:20:29.073444 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 00:20:29.090273 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 00:20:29.104746 containerd[1460]: time="2025-11-01T00:20:29.104498331Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 00:20:29.149316 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 00:20:29.177062 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 00:20:29.192869 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 00:20:29.201531 containerd[1460]: time="2025-11-01T00:20:29.201242452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.203127 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 00:20:29.208197 containerd[1460]: time="2025-11-01T00:20:29.207833752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:20:29.208197 containerd[1460]: time="2025-11-01T00:20:29.207906926Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:20:29.208197 containerd[1460]: time="2025-11-01T00:20:29.207953251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:20:29.208523 containerd[1460]: time="2025-11-01T00:20:29.208495011Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 00:20:29.208646 containerd[1460]: time="2025-11-01T00:20:29.208626611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210409 containerd[1460]: time="2025-11-01T00:20:29.209742052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210409 containerd[1460]: time="2025-11-01T00:20:29.209798358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210409 containerd[1460]: time="2025-11-01T00:20:29.210169875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210409 containerd[1460]: time="2025-11-01T00:20:29.210198880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210409 containerd[1460]: time="2025-11-01T00:20:29.210225382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210409 containerd[1460]: time="2025-11-01T00:20:29.210261539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.210920 containerd[1460]: time="2025-11-01T00:20:29.210774411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.212684 containerd[1460]: time="2025-11-01T00:20:29.212049887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:20:29.212684 containerd[1460]: time="2025-11-01T00:20:29.212604803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:20:29.212684 containerd[1460]: time="2025-11-01T00:20:29.212637885Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:20:29.214773 containerd[1460]: time="2025-11-01T00:20:29.213798338Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:20:29.214773 containerd[1460]: time="2025-11-01T00:20:29.213924851Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:20:29.221183 containerd[1460]: time="2025-11-01T00:20:29.221139408Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:20:29.221484 containerd[1460]: time="2025-11-01T00:20:29.221459874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:20:29.222033 containerd[1460]: time="2025-11-01T00:20:29.221637373Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 00:20:29.222033 containerd[1460]: time="2025-11-01T00:20:29.221692615Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 00:20:29.222033 containerd[1460]: time="2025-11-01T00:20:29.221721581Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:20:29.222033 containerd[1460]: time="2025-11-01T00:20:29.221916403Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:20:29.222796 containerd[1460]: time="2025-11-01T00:20:29.222767902Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:20:29.223066 containerd[1460]: time="2025-11-01T00:20:29.223041732Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223167499Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223196519Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223222502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223245254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223267808Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223293093Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223318563Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223343105Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223364942Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223385860Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223419281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223441736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223463046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.223691 containerd[1460]: time="2025-11-01T00:20:29.223488163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223509580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223532045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223552273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223574898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223597110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223620964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.224315 containerd[1460]: time="2025-11-01T00:20:29.223640337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224623308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224681952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224713069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224748445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224769159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224788895Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224888025Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.224920189Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.225020419Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.225044110Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.225061686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.225082690Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.225101237Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 00:20:29.226450 containerd[1460]: time="2025-11-01T00:20:29.225123888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:20:29.227122 containerd[1460]: time="2025-11-01T00:20:29.225609849Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:20:29.227122 containerd[1460]: time="2025-11-01T00:20:29.225730226Z" level=info msg="Connect containerd service"
Nov 1 00:20:29.227122 containerd[1460]: time="2025-11-01T00:20:29.225784538Z" level=info msg="using legacy CRI server"
Nov 1 00:20:29.227122 containerd[1460]: time="2025-11-01T00:20:29.225797570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 00:20:29.227122 containerd[1460]: time="2025-11-01T00:20:29.225985057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:20:29.228128 containerd[1460]: time="2025-11-01T00:20:29.228075325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:20:29.228397 containerd[1460]: time="2025-11-01T00:20:29.228347149Z" level=info msg="Start subscribing containerd event"
Nov 1 00:20:29.229840 containerd[1460]: time="2025-11-01T00:20:29.229807660Z" level=info msg="Start recovering state"
Nov 1 00:20:29.230051 containerd[1460]: time="2025-11-01T00:20:29.230016856Z" level=info msg="Start event monitor"
Nov 1 00:20:29.230121 containerd[1460]: time="2025-11-01T00:20:29.230063392Z" level=info msg="Start snapshots syncer"
Nov 1 00:20:29.230121 containerd[1460]: time="2025-11-01T00:20:29.230081296Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:20:29.230121 containerd[1460]: time="2025-11-01T00:20:29.230096613Z" level=info msg="Start streaming server"
Nov 1 00:20:29.230489 containerd[1460]: time="2025-11-01T00:20:29.229350815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:20:29.230592 containerd[1460]: time="2025-11-01T00:20:29.230557637Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:20:29.230944 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 00:20:29.232050 containerd[1460]: time="2025-11-01T00:20:29.231009616Z" level=info msg="containerd successfully booted in 0.129852s"
Nov 1 00:20:29.427223 sshd[1528]: Accepted publickey for core from 147.75.109.163 port 33228 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:29.429799 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:29.447487 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 00:20:29.465770 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 00:20:29.472353 tar[1457]: linux-amd64/README.md
Nov 1 00:20:29.493428 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 00:20:29.493839 systemd-logind[1442]: New session 1 of user core.
Nov 1 00:20:29.515648 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 00:20:29.535549 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 00:20:29.563516 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:20:29.617954 systemd-networkd[1357]: eth0: Gained IPv6LL
Nov 1 00:20:29.622932 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 00:20:29.635604 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 00:20:29.659936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:20:29.679533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 00:20:29.697077 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Nov 1 00:20:29.729227 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 00:20:29.731884 init.sh[1556]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Nov 1 00:20:29.731884 init.sh[1556]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Nov 1 00:20:29.731884 init.sh[1556]: + /usr/bin/google_instance_setup
Nov 1 00:20:29.770370 systemd[1547]: Queued start job for default target default.target.
Nov 1 00:20:29.776389 systemd[1547]: Created slice app.slice - User Application Slice.
Nov 1 00:20:29.776435 systemd[1547]: Reached target paths.target - Paths.
Nov 1 00:20:29.776461 systemd[1547]: Reached target timers.target - Timers.
Nov 1 00:20:29.783854 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 00:20:29.808864 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 00:20:29.811216 systemd[1547]: Reached target sockets.target - Sockets.
Nov 1 00:20:29.811455 systemd[1547]: Reached target basic.target - Basic System.
Nov 1 00:20:29.811540 systemd[1547]: Reached target default.target - Main User Target.
Nov 1 00:20:29.811597 systemd[1547]: Startup finished in 239ms.
Nov 1 00:20:29.812080 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 00:20:29.829353 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 00:20:30.081130 systemd[1]: Started sshd@1-10.128.0.8:22-147.75.109.163:33232.service - OpenSSH per-connection server daemon (147.75.109.163:33232).
Nov 1 00:20:30.409300 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 33232 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:30.408790 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:30.423739 systemd-logind[1442]: New session 2 of user core.
Nov 1 00:20:30.428959 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 00:20:30.431285 instance-setup[1564]: INFO Running google_set_multiqueue.
Nov 1 00:20:30.459147 instance-setup[1564]: INFO Set channels for eth0 to 2.
Nov 1 00:20:30.464923 instance-setup[1564]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Nov 1 00:20:30.466812 instance-setup[1564]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Nov 1 00:20:30.467069 instance-setup[1564]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Nov 1 00:20:30.468613 instance-setup[1564]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Nov 1 00:20:30.469036 instance-setup[1564]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Nov 1 00:20:30.471700 instance-setup[1564]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Nov 1 00:20:30.471777 instance-setup[1564]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Nov 1 00:20:30.473298 instance-setup[1564]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Nov 1 00:20:30.482765 instance-setup[1564]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Nov 1 00:20:30.487258 instance-setup[1564]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Nov 1 00:20:30.489315 instance-setup[1564]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Nov 1 00:20:30.489364 instance-setup[1564]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Nov 1 00:20:30.513641 init.sh[1556]: + /usr/bin/google_metadata_script_runner --script-type startup
Nov 1 00:20:30.636251 sshd[1571]: pam_unix(sshd:session): session closed for user core
Nov 1 00:20:30.644123 systemd[1]: sshd@1-10.128.0.8:22-147.75.109.163:33232.service: Deactivated successfully.
Nov 1 00:20:30.648584 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 00:20:30.650507 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit.
Nov 1 00:20:30.653153 systemd-logind[1442]: Removed session 2.
Nov 1 00:20:30.686839 systemd[1]: Started sshd@2-10.128.0.8:22-147.75.109.163:60502.service - OpenSSH per-connection server daemon (147.75.109.163:60502).
Nov 1 00:20:30.703074 startup-script[1604]: INFO Starting startup scripts.
Nov 1 00:20:30.709788 startup-script[1604]: INFO No startup scripts found in metadata.
Nov 1 00:20:30.709867 startup-script[1604]: INFO Finished running startup scripts.
Nov 1 00:20:30.739604 init.sh[1556]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Nov 1 00:20:30.739604 init.sh[1556]: + daemon_pids=()
Nov 1 00:20:30.739604 init.sh[1556]: + for d in accounts clock_skew network
Nov 1 00:20:30.739604 init.sh[1556]: + daemon_pids+=($!)
Nov 1 00:20:30.739604 init.sh[1556]: + for d in accounts clock_skew network
Nov 1 00:20:30.739604 init.sh[1556]: + daemon_pids+=($!)
Nov 1 00:20:30.739604 init.sh[1556]: + for d in accounts clock_skew network
Nov 1 00:20:30.739604 init.sh[1556]: + daemon_pids+=($!)
Nov 1 00:20:30.740438 init.sh[1613]: + /usr/bin/google_accounts_daemon
Nov 1 00:20:30.740813 init.sh[1614]: + /usr/bin/google_clock_skew_daemon
Nov 1 00:20:30.741541 init.sh[1556]: + NOTIFY_SOCKET=/run/systemd/notify
Nov 1 00:20:30.741541 init.sh[1556]: + /usr/bin/systemd-notify --ready
Nov 1 00:20:30.743561 init.sh[1615]: + /usr/bin/google_network_daemon
Nov 1 00:20:30.767754 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Nov 1 00:20:30.786726 init.sh[1556]: + wait -n 1613 1614 1615
Nov 1 00:20:31.022374 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 60502 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:31.023381 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:31.042035 systemd-logind[1442]: New session 3 of user core.
Nov 1 00:20:31.045932 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 00:20:31.138207 google-networking[1615]: INFO Starting Google Networking daemon.
Nov 1 00:20:31.144555 google-clock-skew[1614]: INFO Starting Google Clock Skew daemon.
Nov 1 00:20:31.158553 google-clock-skew[1614]: INFO Clock drift token has changed: 0.
Nov 1 00:20:31.228258 groupadd[1626]: group added to /etc/group: name=google-sudoers, GID=1000
Nov 1 00:20:31.233563 groupadd[1626]: group added to /etc/gshadow: name=google-sudoers
Nov 1 00:20:31.251455 sshd[1611]: pam_unix(sshd:session): session closed for user core
Nov 1 00:20:31.256923 systemd[1]: sshd@2-10.128.0.8:22-147.75.109.163:60502.service: Deactivated successfully.
Nov 1 00:20:31.260517 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 00:20:31.263160 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit.
Nov 1 00:20:31.265512 systemd-logind[1442]: Removed session 3.
Nov 1 00:20:31.298773 groupadd[1626]: new group: name=google-sudoers, GID=1000
Nov 1 00:20:31.330050 google-accounts[1613]: INFO Starting Google Accounts daemon.
Nov 1 00:20:31.000543 systemd-resolved[1359]: Clock change detected. Flushing caches.
Nov 1 00:20:31.023477 systemd-journald[1108]: Time jumped backwards, rotating.
Nov 1 00:20:31.000836 google-clock-skew[1614]: INFO Synced system time with hardware clock.
Nov 1 00:20:31.023936 google-accounts[1613]: WARNING OS Login not installed.
Nov 1 00:20:31.026181 google-accounts[1613]: INFO Creating a new user account for 0.
Nov 1 00:20:31.032605 init.sh[1638]: useradd: invalid user name '0': use --badname to ignore
Nov 1 00:20:31.033144 google-accounts[1613]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Nov 1 00:20:31.390440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:20:31.402728 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 00:20:31.408170 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:20:31.414274 systemd[1]: Startup finished in 1.093s (kernel) + 10.044s (initrd) + 9.992s (userspace) = 21.130s.
Nov 1 00:20:31.635612 ntpd[1429]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:8%2]:123
Nov 1 00:20:31.636225 ntpd[1429]: 1 Nov 00:20:31 ntpd[1429]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:8%2]:123
Nov 1 00:20:32.240804 kubelet[1645]: E1101 00:20:32.240721 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:20:32.243753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:20:32.244007 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:20:32.244472 systemd[1]: kubelet.service: Consumed 1.252s CPU time.
Nov 1 00:20:40.980117 systemd[1]: Started sshd@3-10.128.0.8:22-147.75.109.163:60384.service - OpenSSH per-connection server daemon (147.75.109.163:60384).
Nov 1 00:20:41.277107 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 60384 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:41.279436 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:41.285736 systemd-logind[1442]: New session 4 of user core.
Nov 1 00:20:41.292978 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 00:20:41.495767 sshd[1657]: pam_unix(sshd:session): session closed for user core
Nov 1 00:20:41.500965 systemd[1]: sshd@3-10.128.0.8:22-147.75.109.163:60384.service: Deactivated successfully.
Nov 1 00:20:41.503855 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:20:41.505812 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:20:41.507286 systemd-logind[1442]: Removed session 4.
Nov 1 00:20:41.557036 systemd[1]: Started sshd@4-10.128.0.8:22-147.75.109.163:60400.service - OpenSSH per-connection server daemon (147.75.109.163:60400).
Nov 1 00:20:41.843255 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 60400 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:41.845198 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:41.850431 systemd-logind[1442]: New session 5 of user core.
Nov 1 00:20:41.857836 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 00:20:42.055088 sshd[1664]: pam_unix(sshd:session): session closed for user core
Nov 1 00:20:42.059569 systemd[1]: sshd@4-10.128.0.8:22-147.75.109.163:60400.service: Deactivated successfully.
Nov 1 00:20:42.061971 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:20:42.064078 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:20:42.065634 systemd-logind[1442]: Removed session 5.
Nov 1 00:20:42.109994 systemd[1]: Started sshd@5-10.128.0.8:22-147.75.109.163:60404.service - OpenSSH per-connection server daemon (147.75.109.163:60404).
Nov 1 00:20:42.344535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:20:42.353024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:20:42.403619 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 60404 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:42.404388 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:42.412539 systemd-logind[1442]: New session 6 of user core.
Nov 1 00:20:42.424902 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 00:20:42.617420 sshd[1671]: pam_unix(sshd:session): session closed for user core
Nov 1 00:20:42.622798 systemd[1]: sshd@5-10.128.0.8:22-147.75.109.163:60404.service: Deactivated successfully.
Nov 1 00:20:42.625704 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:20:42.626939 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:20:42.629220 systemd-logind[1442]: Removed session 6.
Nov 1 00:20:42.681766 systemd[1]: Started sshd@6-10.128.0.8:22-147.75.109.163:60410.service - OpenSSH per-connection server daemon (147.75.109.163:60410).
Nov 1 00:20:42.706474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:20:42.719266 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:20:42.771791 kubelet[1688]: E1101 00:20:42.771718 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:20:42.776063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:20:42.776317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:20:42.972402 sshd[1683]: Accepted publickey for core from 147.75.109.163 port 60410 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE
Nov 1 00:20:42.974373 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:20:42.980758 systemd-logind[1442]: New session 7 of user core.
Nov 1 00:20:42.986857 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 00:20:43.167095 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:20:43.167661 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:43.183485 sudo[1696]: pam_unix(sudo:session): session closed for user root Nov 1 00:20:43.227268 sshd[1683]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:43.233498 systemd[1]: sshd@6-10.128.0.8:22-147.75.109.163:60410.service: Deactivated successfully. Nov 1 00:20:43.235740 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:20:43.236711 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:20:43.238228 systemd-logind[1442]: Removed session 7. Nov 1 00:20:43.281980 systemd[1]: Started sshd@7-10.128.0.8:22-147.75.109.163:60422.service - OpenSSH per-connection server daemon (147.75.109.163:60422). Nov 1 00:20:43.567290 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 60422 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:20:43.568853 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:43.576607 systemd-logind[1442]: New session 8 of user core. Nov 1 00:20:43.581888 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:20:43.746063 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:20:43.746607 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:43.751967 sudo[1705]: pam_unix(sudo:session): session closed for user root Nov 1 00:20:43.765325 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:20:43.765844 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:43.786164 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Nov 1 00:20:43.788919 auditctl[1708]: No rules Nov 1 00:20:43.790190 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:20:43.790480 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:20:43.792952 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:20:43.841222 augenrules[1726]: No rules Nov 1 00:20:43.842963 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:20:43.844474 sudo[1704]: pam_unix(sudo:session): session closed for user root Nov 1 00:20:43.888415 sshd[1701]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:43.892806 systemd[1]: sshd@7-10.128.0.8:22-147.75.109.163:60422.service: Deactivated successfully. Nov 1 00:20:43.895205 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:20:43.897159 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:20:43.898648 systemd-logind[1442]: Removed session 8. Nov 1 00:20:43.946973 systemd[1]: Started sshd@8-10.128.0.8:22-147.75.109.163:60438.service - OpenSSH per-connection server daemon (147.75.109.163:60438). Nov 1 00:20:44.237416 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 60438 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:20:44.239391 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:20:44.245166 systemd-logind[1442]: New session 9 of user core. Nov 1 00:20:44.252875 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:20:44.415544 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:20:44.416104 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:20:44.875027 systemd[1]: Starting docker.service - Docker Application Container Engine... 
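The `audit-rules.service` stop/start cycle above (with `auditctl` and `augenrules` both reporting "No rules" after the two rules files were removed) reflects how `augenrules` assembles its rule set: it merges every `*.rules` file under `/etc/audit/rules.d/` into a single `audit.rules` file in lexical order. A sketch of that merge, using a scratch directory in place of `/etc/audit` and hypothetical rule files:

```shell
# Sketch of the merge augenrules performs: concatenate rules.d/*.rules
# (lexical order) into one audit.rules file. Scratch dir stands in for
# /etc/audit; the two rule files here are illustrative, not the real ones.
d=$(mktemp -d)
mkdir "$d/rules.d"
printf -- '-D\n' > "$d/rules.d/10-delete.rules"
printf -- '-w /etc/passwd -p wa -k identity\n' > "$d/rules.d/80-watch.rules"
cat "$d"/rules.d/*.rules > "$d/audit.rules"
merged=$(wc -l < "$d/audit.rules")
echo "$merged"
```

Deleting files from `rules.d/` and restarting the service, as the `sudo rm -rf` entries above do, therefore empties the merged rule set — hence the "No rules" messages.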
Nov 1 00:20:44.875252 (dockerd)[1752]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:20:45.335408 dockerd[1752]: time="2025-11-01T00:20:45.335213589Z" level=info msg="Starting up" Nov 1 00:20:45.457325 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3299709855-merged.mount: Deactivated successfully. Nov 1 00:20:45.485919 dockerd[1752]: time="2025-11-01T00:20:45.485825488Z" level=info msg="Loading containers: start." Nov 1 00:20:45.652816 kernel: Initializing XFRM netlink socket Nov 1 00:20:45.773371 systemd-networkd[1357]: docker0: Link UP Nov 1 00:20:45.796566 dockerd[1752]: time="2025-11-01T00:20:45.796503183Z" level=info msg="Loading containers: done." Nov 1 00:20:45.821193 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3212520415-merged.mount: Deactivated successfully. Nov 1 00:20:45.821740 dockerd[1752]: time="2025-11-01T00:20:45.821169150Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:20:45.822158 dockerd[1752]: time="2025-11-01T00:20:45.821850496Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:20:45.822158 dockerd[1752]: time="2025-11-01T00:20:45.822085165Z" level=info msg="Daemon has completed initialization" Nov 1 00:20:45.865786 dockerd[1752]: time="2025-11-01T00:20:45.865690408Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:20:45.866178 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 1 00:20:46.794783 containerd[1460]: time="2025-11-01T00:20:46.794288074Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:20:47.346272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214740261.mount: Deactivated successfully. Nov 1 00:20:49.215933 containerd[1460]: time="2025-11-01T00:20:49.215847683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:49.217829 containerd[1460]: time="2025-11-01T00:20:49.217550433Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27072975" Nov 1 00:20:49.219907 containerd[1460]: time="2025-11-01T00:20:49.219149568Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:49.223511 containerd[1460]: time="2025-11-01T00:20:49.223465052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:49.225402 containerd[1460]: time="2025-11-01T00:20:49.225143748Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.430795691s" Nov 1 00:20:49.225402 containerd[1460]: time="2025-11-01T00:20:49.225198041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:20:49.226704 containerd[1460]: time="2025-11-01T00:20:49.226661228Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:20:50.857582 containerd[1460]: time="2025-11-01T00:20:50.857507564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:50.859291 containerd[1460]: time="2025-11-01T00:20:50.859200617Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21161691" Nov 1 00:20:50.861474 containerd[1460]: time="2025-11-01T00:20:50.860870183Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:50.864994 containerd[1460]: time="2025-11-01T00:20:50.864927547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:50.867461 containerd[1460]: time="2025-11-01T00:20:50.866455731Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.639751909s" Nov 1 00:20:50.867461 containerd[1460]: time="2025-11-01T00:20:50.866505461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:20:50.868078 containerd[1460]: time="2025-11-01T00:20:50.868039320Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:20:52.137660 containerd[1460]: 
time="2025-11-01T00:20:52.137580351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:52.139479 containerd[1460]: time="2025-11-01T00:20:52.139168581Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15727009" Nov 1 00:20:52.142047 containerd[1460]: time="2025-11-01T00:20:52.140902814Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:52.150451 containerd[1460]: time="2025-11-01T00:20:52.150388611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:52.151421 containerd[1460]: time="2025-11-01T00:20:52.151374274Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.283295946s" Nov 1 00:20:52.151575 containerd[1460]: time="2025-11-01T00:20:52.151548954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:20:52.152468 containerd[1460]: time="2025-11-01T00:20:52.152319777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:20:53.026618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:20:53.038728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 00:20:53.354889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:20:53.366776 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:20:53.469939 kubelet[1968]: E1101 00:20:53.469867 1968 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:53.473052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:53.473307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:53.500259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478797752.mount: Deactivated successfully. Nov 1 00:20:54.007960 containerd[1460]: time="2025-11-01T00:20:54.007886834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:54.009542 containerd[1460]: time="2025-11-01T00:20:54.009329941Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25966594" Nov 1 00:20:54.012601 containerd[1460]: time="2025-11-01T00:20:54.011018004Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:54.015025 containerd[1460]: time="2025-11-01T00:20:54.014057969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:54.015528 containerd[1460]: time="2025-11-01T00:20:54.015470131Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.863093157s" Nov 1 00:20:54.015726 containerd[1460]: time="2025-11-01T00:20:54.015684616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:20:54.017612 containerd[1460]: time="2025-11-01T00:20:54.017565422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:20:54.511291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166479114.mount: Deactivated successfully. Nov 1 00:20:55.881943 containerd[1460]: time="2025-11-01T00:20:55.881868483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:55.883733 containerd[1460]: time="2025-11-01T00:20:55.883643614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22394649" Nov 1 00:20:55.886631 containerd[1460]: time="2025-11-01T00:20:55.885317039Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:55.893149 containerd[1460]: time="2025-11-01T00:20:55.893088243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:55.894815 containerd[1460]: time="2025-11-01T00:20:55.894763795Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.877029765s" Nov 1 00:20:55.894939 containerd[1460]: time="2025-11-01T00:20:55.894819799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:20:55.895619 containerd[1460]: time="2025-11-01T00:20:55.895568328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:20:56.324386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119379843.mount: Deactivated successfully. Nov 1 00:20:56.335405 containerd[1460]: time="2025-11-01T00:20:56.335343546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:56.336884 containerd[1460]: time="2025-11-01T00:20:56.336508475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=322152" Nov 1 00:20:56.340624 containerd[1460]: time="2025-11-01T00:20:56.338394527Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:56.342370 containerd[1460]: time="2025-11-01T00:20:56.342331335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:56.343767 containerd[1460]: time="2025-11-01T00:20:56.343717972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo 
digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 448.07596ms" Nov 1 00:20:56.343869 containerd[1460]: time="2025-11-01T00:20:56.343772135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:20:56.344650 containerd[1460]: time="2025-11-01T00:20:56.344622123Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:20:58.741314 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:20:59.652011 containerd[1460]: time="2025-11-01T00:20:59.651937530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:59.653771 containerd[1460]: time="2025-11-01T00:20:59.653706760Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73518341" Nov 1 00:20:59.656628 containerd[1460]: time="2025-11-01T00:20:59.654939664Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:59.659266 containerd[1460]: time="2025-11-01T00:20:59.659213659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:20:59.661020 containerd[1460]: time="2025-11-01T00:20:59.660978623Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.316205381s" Nov 1 00:20:59.661168 containerd[1460]: 
time="2025-11-01T00:20:59.661143423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:21:03.723898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:21:03.734726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:04.113878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:04.125500 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:04.191701 kubelet[2109]: E1101 00:21:04.191646 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:04.195790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:04.196210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:05.339162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:05.346980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:05.406372 systemd[1]: Reloading requested from client PID 2123 ('systemctl') (unit session-9.scope)... Nov 1 00:21:05.406618 systemd[1]: Reloading... Nov 1 00:21:05.562620 zram_generator::config[2163]: No configuration found. Nov 1 00:21:05.719852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:05.823625 systemd[1]: Reloading finished in 416 ms. 
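The daemon-reload above warns that `docker.socket` still references the legacy `/var/run/docker.sock` path. One conventional way to silence that (an assumption about how an operator might respond, not something this log shows being done) is a drop-in that clears the inherited listener and re-adds the `/run` path:

```shell
# A drop-in that clears the legacy listener and re-adds /run/docker.sock.
# Written to a scratch dir here; on a real host it would live under
# /etc/systemd/system/docker.socket.d/ and be followed by a daemon-reload.
# The empty ListenStream= line resets any previously declared sockets.
d=$(mktemp -d)
cat > "$d/10-listen.conf" <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
listeners=$(grep -c '^ListenStream=' "$d/10-listen.conf")
echo "$listeners"
```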
Nov 1 00:21:05.892216 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:21:05.892358 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:21:05.892893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:05.897064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:06.270742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:06.286205 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:21:06.343383 kubelet[2214]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:21:06.343383 kubelet[2214]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:06.343902 kubelet[2214]: I1101 00:21:06.343438 2214 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:06.777639 kubelet[2214]: I1101 00:21:06.777565 2214 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:21:06.777639 kubelet[2214]: I1101 00:21:06.777626 2214 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:06.778056 kubelet[2214]: I1101 00:21:06.777666 2214 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:21:06.778056 kubelet[2214]: I1101 00:21:06.777682 2214 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
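Of the two deprecation warnings the kubelet prints above, the log itself notes that sandbox-image information will come from the CRI, so `--pod-infra-container-image` has no config-file replacement; `--volume-plugin-dir`, however, maps onto the `volumePluginDir` field of the KubeletConfiguration file. A sketch of that migration (scratch path, not the node's live `/var/lib/kubelet/config.yaml`):

```shell
# Sketch: the deprecated --volume-plugin-dir flag expressed as the
# volumePluginDir field of a KubeletConfiguration. The directory value
# matches the flexvolume path the kubelet recreates later in this log.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
has=$(grep -c '^volumePluginDir:' "$cfg")
echo "$has"
```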
Nov 1 00:21:06.778235 kubelet[2214]: I1101 00:21:06.778186 2214 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:21:06.788075 kubelet[2214]: E1101 00:21:06.787832 2214 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:21:06.788075 kubelet[2214]: I1101 00:21:06.787854 2214 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:06.793558 kubelet[2214]: E1101 00:21:06.793467 2214 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:06.793734 kubelet[2214]: I1101 00:21:06.793664 2214 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:06.797195 kubelet[2214]: I1101 00:21:06.797124 2214 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:21:06.797678 kubelet[2214]: I1101 00:21:06.797631 2214 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:06.798611 kubelet[2214]: I1101 00:21:06.797679 2214 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:21:06.798611 kubelet[2214]: I1101 00:21:06.798128 2214 topology_manager.go:138] "Creating topology manager 
with none policy" Nov 1 00:21:06.798611 kubelet[2214]: I1101 00:21:06.798148 2214 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:21:06.798611 kubelet[2214]: I1101 00:21:06.798294 2214 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:21:06.801712 kubelet[2214]: I1101 00:21:06.801676 2214 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:06.803851 kubelet[2214]: I1101 00:21:06.803807 2214 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:21:06.803851 kubelet[2214]: I1101 00:21:06.803850 2214 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:06.803996 kubelet[2214]: I1101 00:21:06.803923 2214 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:21:06.803996 kubelet[2214]: I1101 00:21:06.803985 2214 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:06.809607 kubelet[2214]: E1101 00:21:06.809550 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9&limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:21:06.809836 kubelet[2214]: E1101 00:21:06.809779 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:21:06.809980 kubelet[2214]: I1101 00:21:06.809952 2214 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:21:06.810606 kubelet[2214]: I1101 
00:21:06.810556 2214 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:21:06.810686 kubelet[2214]: I1101 00:21:06.810619 2214 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:21:06.810746 kubelet[2214]: W1101 00:21:06.810698 2214 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:21:06.827527 kubelet[2214]: I1101 00:21:06.827479 2214 server.go:1262] "Started kubelet" Nov 1 00:21:06.828832 kubelet[2214]: I1101 00:21:06.828789 2214 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:06.844193 kubelet[2214]: E1101 00:21:06.840420 2214 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9.1873ba1242f33d51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,UID:ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,},FirstTimestamp:2025-11-01 00:21:06.827410769 +0000 UTC m=+0.536285310,LastTimestamp:2025-11-01 00:21:06.827410769 +0000 UTC m=+0.536285310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,}" Nov 1 00:21:06.845163 kubelet[2214]: I1101 00:21:06.845116 2214 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:06.847775 kubelet[2214]: I1101 00:21:06.847726 2214 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:21:06.849807 kubelet[2214]: I1101 00:21:06.849781 2214 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:21:06.850244 kubelet[2214]: I1101 00:21:06.850204 2214 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:06.850323 kubelet[2214]: I1101 00:21:06.850271 2214 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:21:06.850747 kubelet[2214]: I1101 00:21:06.850721 2214 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:06.854579 kubelet[2214]: I1101 00:21:06.854545 2214 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:06.854865 kubelet[2214]: I1101 00:21:06.854846 2214 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:21:06.855304 kubelet[2214]: E1101 00:21:06.855274 2214 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" Nov 1 00:21:06.861543 kubelet[2214]: E1101 00:21:06.859625 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9?timeout=10s\": dial tcp 10.128.0.8:6443: connect: connection refused" interval="200ms" Nov 1 00:21:06.861543 kubelet[2214]: I1101 00:21:06.859708 2214 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:21:06.864459 kubelet[2214]: I1101 00:21:06.864436 2214 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:21:06.865185 
kubelet[2214]: I1101 00:21:06.865161 2214 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:21:06.865489 kubelet[2214]: I1101 00:21:06.865464 2214 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:06.868578 kubelet[2214]: E1101 00:21:06.868552 2214 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:06.869029 kubelet[2214]: I1101 00:21:06.869008 2214 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:21:06.879162 kubelet[2214]: I1101 00:21:06.879121 2214 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:21:06.879162 kubelet[2214]: I1101 00:21:06.879160 2214 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:21:06.879335 kubelet[2214]: I1101 00:21:06.879191 2214 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:21:06.879335 kubelet[2214]: E1101 00:21:06.879247 2214 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:06.883037 kubelet[2214]: E1101 00:21:06.883000 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:21:06.886363 kubelet[2214]: E1101 00:21:06.886315 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:21:06.912224 kubelet[2214]: I1101 00:21:06.912189 2214 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:06.912450 kubelet[2214]: I1101 00:21:06.912430 2214 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:06.912630 kubelet[2214]: I1101 00:21:06.912609 2214 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:06.915879 kubelet[2214]: I1101 00:21:06.915847 2214 policy_none.go:49] "None policy: Start" Nov 1 00:21:06.916364 kubelet[2214]: I1101 00:21:06.916021 2214 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:21:06.916364 kubelet[2214]: I1101 00:21:06.916046 2214 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:21:06.918201 kubelet[2214]: I1101 00:21:06.918072 2214 policy_none.go:47] "Start" Nov 1 00:21:06.924645 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:21:06.941034 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:21:06.945438 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 00:21:06.954887 kubelet[2214]: E1101 00:21:06.954853 2214 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:21:06.955318 kubelet[2214]: I1101 00:21:06.955134 2214 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:06.955318 kubelet[2214]: I1101 00:21:06.955157 2214 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:06.956121 kubelet[2214]: I1101 00:21:06.956012 2214 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:21:06.958986 kubelet[2214]: E1101 00:21:06.958865 2214 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:21:06.958986 kubelet[2214]: E1101 00:21:06.958949 2214 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" Nov 1 00:21:07.004382 systemd[1]: Created slice kubepods-burstable-pod148ff777142ac6f17d9c431b885bf305.slice - libcontainer container kubepods-burstable-pod148ff777142ac6f17d9c431b885bf305.slice. Nov 1 00:21:07.013528 kubelet[2214]: E1101 00:21:07.012783 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.019862 systemd[1]: Created slice kubepods-burstable-pod50e2a0b4dc0a7a5984c9ec4dce856790.slice - libcontainer container kubepods-burstable-pod50e2a0b4dc0a7a5984c9ec4dce856790.slice. 
Nov 1 00:21:07.023401 kubelet[2214]: E1101 00:21:07.023365 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.027044 systemd[1]: Created slice kubepods-burstable-pod6043162b08e884c0a4c91453d7d4467f.slice - libcontainer container kubepods-burstable-pod6043162b08e884c0a4c91453d7d4467f.slice. Nov 1 00:21:07.030673 kubelet[2214]: E1101 00:21:07.029666 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.061347 kubelet[2214]: E1101 00:21:07.060800 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9?timeout=10s\": dial tcp 10.128.0.8:6443: connect: connection refused" interval="400ms" Nov 1 00:21:07.061347 kubelet[2214]: I1101 00:21:07.060850 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.061347 kubelet[2214]: E1101 00:21:07.061237 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.8:6443/api/v1/nodes\": dial tcp 10.128.0.8:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065539 kubelet[2214]: I1101 00:21:07.065486 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065715 kubelet[2214]: I1101 00:21:07.065544 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065715 kubelet[2214]: I1101 00:21:07.065576 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065715 kubelet[2214]: I1101 00:21:07.065702 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065904 kubelet[2214]: I1101 00:21:07.065737 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6043162b08e884c0a4c91453d7d4467f-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"6043162b08e884c0a4c91453d7d4467f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065904 kubelet[2214]: I1101 00:21:07.065768 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/148ff777142ac6f17d9c431b885bf305-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"148ff777142ac6f17d9c431b885bf305\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065904 kubelet[2214]: I1101 00:21:07.065794 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/148ff777142ac6f17d9c431b885bf305-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"148ff777142ac6f17d9c431b885bf305\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.065904 kubelet[2214]: I1101 00:21:07.065823 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/148ff777142ac6f17d9c431b885bf305-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"148ff777142ac6f17d9c431b885bf305\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.066080 kubelet[2214]: I1101 00:21:07.065873 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.266873 kubelet[2214]: I1101 00:21:07.266749 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.267347 kubelet[2214]: E1101 00:21:07.267256 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.8:6443/api/v1/nodes\": dial tcp 10.128.0.8:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.319121 containerd[1460]: time="2025-11-01T00:21:07.318973142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,Uid:148ff777142ac6f17d9c431b885bf305,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:07.328156 containerd[1460]: time="2025-11-01T00:21:07.328086316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,Uid:50e2a0b4dc0a7a5984c9ec4dce856790,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:07.334719 containerd[1460]: time="2025-11-01T00:21:07.333753732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,Uid:6043162b08e884c0a4c91453d7d4467f,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:07.462620 kubelet[2214]: E1101 00:21:07.462518 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9?timeout=10s\": dial tcp 10.128.0.8:6443: connect: connection refused" interval="800ms" Nov 1 00:21:07.651089 kubelet[2214]: E1101 00:21:07.650904 2214 
reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9&limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:21:07.672399 kubelet[2214]: I1101 00:21:07.672341 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.672951 kubelet[2214]: E1101 00:21:07.672897 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.8:6443/api/v1/nodes\": dial tcp 10.128.0.8:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:07.724898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931753108.mount: Deactivated successfully. Nov 1 00:21:07.735520 containerd[1460]: time="2025-11-01T00:21:07.735442852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:07.738406 containerd[1460]: time="2025-11-01T00:21:07.738331798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:21:07.739760 containerd[1460]: time="2025-11-01T00:21:07.739713075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:07.741616 containerd[1460]: time="2025-11-01T00:21:07.741525126Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Nov 1 00:21:07.743083 containerd[1460]: time="2025-11-01T00:21:07.743029970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:07.744276 containerd[1460]: time="2025-11-01T00:21:07.744211202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Nov 1 00:21:07.745846 containerd[1460]: time="2025-11-01T00:21:07.745754173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:21:07.749632 containerd[1460]: time="2025-11-01T00:21:07.748895516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:21:07.750631 containerd[1460]: time="2025-11-01T00:21:07.750571040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.369948ms" Nov 1 00:21:07.752109 containerd[1460]: time="2025-11-01T00:21:07.752052877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 432.967309ms" Nov 1 00:21:07.757130 containerd[1460]: time="2025-11-01T00:21:07.757068060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 423.226349ms" Nov 1 00:21:07.949848 kubelet[2214]: E1101 00:21:07.949691 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:21:07.961988 containerd[1460]: time="2025-11-01T00:21:07.961561364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:07.962456 containerd[1460]: time="2025-11-01T00:21:07.962372600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:07.963127 containerd[1460]: time="2025-11-01T00:21:07.962777293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:07.965026 containerd[1460]: time="2025-11-01T00:21:07.964918837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:07.978954 containerd[1460]: time="2025-11-01T00:21:07.978833804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:07.980704 containerd[1460]: time="2025-11-01T00:21:07.980524814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:07.980982 containerd[1460]: time="2025-11-01T00:21:07.980646081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:07.981972 containerd[1460]: time="2025-11-01T00:21:07.981329533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:07.983273 containerd[1460]: time="2025-11-01T00:21:07.979871812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:07.983273 containerd[1460]: time="2025-11-01T00:21:07.982988702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:07.983273 containerd[1460]: time="2025-11-01T00:21:07.983014147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:07.988342 containerd[1460]: time="2025-11-01T00:21:07.984764939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:08.012852 systemd[1]: Started cri-containerd-760354578240e4cdf3b4fe19d2de8789fc39a0b0b59e57d00ef5131456e57447.scope - libcontainer container 760354578240e4cdf3b4fe19d2de8789fc39a0b0b59e57d00ef5131456e57447. Nov 1 00:21:08.031983 systemd[1]: Started cri-containerd-8f29cf1ec1cdf2eda979f2ba92df6d87ae3fe996a712ad59d7b7493f88e31df9.scope - libcontainer container 8f29cf1ec1cdf2eda979f2ba92df6d87ae3fe996a712ad59d7b7493f88e31df9. Nov 1 00:21:08.040889 systemd[1]: Started cri-containerd-276a57525d661387d75fd19f74f477919c3fb6f260bedb00551df9c58ecad7f4.scope - libcontainer container 276a57525d661387d75fd19f74f477919c3fb6f260bedb00551df9c58ecad7f4. 
Nov 1 00:21:08.138943 containerd[1460]: time="2025-11-01T00:21:08.138714975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,Uid:148ff777142ac6f17d9c431b885bf305,Namespace:kube-system,Attempt:0,} returns sandbox id \"760354578240e4cdf3b4fe19d2de8789fc39a0b0b59e57d00ef5131456e57447\"" Nov 1 00:21:08.147614 kubelet[2214]: E1101 00:21:08.144994 2214 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f" Nov 1 00:21:08.150322 containerd[1460]: time="2025-11-01T00:21:08.150177616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,Uid:50e2a0b4dc0a7a5984c9ec4dce856790,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f29cf1ec1cdf2eda979f2ba92df6d87ae3fe996a712ad59d7b7493f88e31df9\"" Nov 1 00:21:08.153693 kubelet[2214]: E1101 00:21:08.153612 2214 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d9" Nov 1 00:21:08.153985 containerd[1460]: time="2025-11-01T00:21:08.153940109Z" level=info msg="CreateContainer within sandbox \"760354578240e4cdf3b4fe19d2de8789fc39a0b0b59e57d00ef5131456e57447\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:21:08.156578 kubelet[2214]: E1101 00:21:08.156521 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:21:08.160930 containerd[1460]: time="2025-11-01T00:21:08.160886512Z" level=info msg="CreateContainer within sandbox \"8f29cf1ec1cdf2eda979f2ba92df6d87ae3fe996a712ad59d7b7493f88e31df9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:21:08.182504 containerd[1460]: time="2025-11-01T00:21:08.182292980Z" level=info msg="CreateContainer within sandbox \"760354578240e4cdf3b4fe19d2de8789fc39a0b0b59e57d00ef5131456e57447\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"166b76239a29ed0568c57634efa33e8e85cada8ddca8fae8d0b133d634939e8c\"" Nov 1 00:21:08.183923 containerd[1460]: time="2025-11-01T00:21:08.183884077Z" level=info msg="StartContainer for \"166b76239a29ed0568c57634efa33e8e85cada8ddca8fae8d0b133d634939e8c\"" Nov 1 00:21:08.192424 containerd[1460]: time="2025-11-01T00:21:08.192344041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9,Uid:6043162b08e884c0a4c91453d7d4467f,Namespace:kube-system,Attempt:0,} returns sandbox id \"276a57525d661387d75fd19f74f477919c3fb6f260bedb00551df9c58ecad7f4\"" Nov 1 00:21:08.196157 containerd[1460]: time="2025-11-01T00:21:08.196056494Z" level=info msg="CreateContainer within sandbox \"8f29cf1ec1cdf2eda979f2ba92df6d87ae3fe996a712ad59d7b7493f88e31df9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5582d66259535f40db918f381babfe909d8c9a49b487b067ca2f9d106f44a2e9\"" Nov 1 00:21:08.196291 kubelet[2214]: E1101 00:21:08.196096 2214 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f" Nov 1 00:21:08.198577 containerd[1460]: time="2025-11-01T00:21:08.196993038Z" level=info 
msg="StartContainer for \"5582d66259535f40db918f381babfe909d8c9a49b487b067ca2f9d106f44a2e9\"" Nov 1 00:21:08.200842 containerd[1460]: time="2025-11-01T00:21:08.200730054Z" level=info msg="CreateContainer within sandbox \"276a57525d661387d75fd19f74f477919c3fb6f260bedb00551df9c58ecad7f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:21:08.226571 containerd[1460]: time="2025-11-01T00:21:08.226503614Z" level=info msg="CreateContainer within sandbox \"276a57525d661387d75fd19f74f477919c3fb6f260bedb00551df9c58ecad7f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70e483be55b0e168428a48943a5b7e6e43b5b671daa98122c848cb6a56e379bc\"" Nov 1 00:21:08.231306 containerd[1460]: time="2025-11-01T00:21:08.231257597Z" level=info msg="StartContainer for \"70e483be55b0e168428a48943a5b7e6e43b5b671daa98122c848cb6a56e379bc\"" Nov 1 00:21:08.238869 systemd[1]: Started cri-containerd-166b76239a29ed0568c57634efa33e8e85cada8ddca8fae8d0b133d634939e8c.scope - libcontainer container 166b76239a29ed0568c57634efa33e8e85cada8ddca8fae8d0b133d634939e8c. 
Nov 1 00:21:08.257543 kubelet[2214]: E1101 00:21:08.257475 2214 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:21:08.263463 kubelet[2214]: E1101 00:21:08.263416 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9?timeout=10s\": dial tcp 10.128.0.8:6443: connect: connection refused" interval="1.6s" Nov 1 00:21:08.286817 systemd[1]: Started cri-containerd-5582d66259535f40db918f381babfe909d8c9a49b487b067ca2f9d106f44a2e9.scope - libcontainer container 5582d66259535f40db918f381babfe909d8c9a49b487b067ca2f9d106f44a2e9. Nov 1 00:21:08.306879 systemd[1]: Started cri-containerd-70e483be55b0e168428a48943a5b7e6e43b5b671daa98122c848cb6a56e379bc.scope - libcontainer container 70e483be55b0e168428a48943a5b7e6e43b5b671daa98122c848cb6a56e379bc. 
Nov 1 00:21:08.364625 containerd[1460]: time="2025-11-01T00:21:08.364049682Z" level=info msg="StartContainer for \"166b76239a29ed0568c57634efa33e8e85cada8ddca8fae8d0b133d634939e8c\" returns successfully" Nov 1 00:21:08.436953 containerd[1460]: time="2025-11-01T00:21:08.436086569Z" level=info msg="StartContainer for \"5582d66259535f40db918f381babfe909d8c9a49b487b067ca2f9d106f44a2e9\" returns successfully" Nov 1 00:21:08.452883 containerd[1460]: time="2025-11-01T00:21:08.452555723Z" level=info msg="StartContainer for \"70e483be55b0e168428a48943a5b7e6e43b5b671daa98122c848cb6a56e379bc\" returns successfully" Nov 1 00:21:08.480146 kubelet[2214]: I1101 00:21:08.480095 2214 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:08.918012 kubelet[2214]: E1101 00:21:08.917868 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:08.918406 kubelet[2214]: E1101 00:21:08.918377 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:08.919612 kubelet[2214]: E1101 00:21:08.919553 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:09.925645 kubelet[2214]: E1101 00:21:09.925283 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" 
node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:09.927804 kubelet[2214]: E1101 00:21:09.927581 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:10.824651 kubelet[2214]: E1101 00:21:10.823388 2214 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.659283 kubelet[2214]: E1101 00:21:11.659220 2214 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" not found" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.807729 kubelet[2214]: I1101 00:21:11.807426 2214 apiserver.go:52] "Watching apiserver" Nov 1 00:21:11.823621 kubelet[2214]: I1101 00:21:11.822834 2214 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.857931 kubelet[2214]: I1101 00:21:11.856775 2214 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.860803 kubelet[2214]: I1101 00:21:11.860765 2214 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:21:11.893733 kubelet[2214]: E1101 00:21:11.893672 2214 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" 
Nov 1 00:21:11.893962 kubelet[2214]: I1101 00:21:11.893944 2214 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.906004 kubelet[2214]: E1101 00:21:11.905959 2214 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.906230 kubelet[2214]: I1101 00:21:11.906211 2214 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:11.915022 kubelet[2214]: E1101 00:21:11.914881 2214 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:21:13.287199 update_engine[1449]: I20251101 00:21:13.287089 1449 update_attempter.cc:509] Updating boot flags... 
Nov 1 00:21:13.368385 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2506)
Nov 1 00:21:13.547141 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2509)
Nov 1 00:21:13.616851 kubelet[2214]: I1101 00:21:13.614273 2214 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:13.633898 kubelet[2214]: I1101 00:21:13.633850 2214 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Nov 1 00:21:13.775839 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-9.scope)...
Nov 1 00:21:13.775861 systemd[1]: Reloading...
Nov 1 00:21:13.941649 zram_generator::config[2562]: No configuration found.
Nov 1 00:21:14.073446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:14.275885 systemd[1]: Reloading finished in 499 ms.
Nov 1 00:21:14.329855 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:14.343982 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 00:21:14.344431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:14.344613 systemd[1]: kubelet.service: Consumed 1.138s CPU time, 125.4M memory peak, 0B memory swap peak.
Nov 1 00:21:14.351312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:14.696823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:14.710207 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:21:14.796862 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:21:14.796862 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:21:14.796862 kubelet[2603]: I1101 00:21:14.795387 2603 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:21:14.806292 kubelet[2603]: I1101 00:21:14.806251 2603 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 1 00:21:14.806505 kubelet[2603]: I1101 00:21:14.806489 2603 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:21:14.806680 kubelet[2603]: I1101 00:21:14.806634 2603 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 1 00:21:14.806680 kubelet[2603]: I1101 00:21:14.806655 2603 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:21:14.807122 kubelet[2603]: I1101 00:21:14.807095 2603 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 1 00:21:14.809904 kubelet[2603]: I1101 00:21:14.808655 2603 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 1 00:21:14.812532 kubelet[2603]: I1101 00:21:14.812496 2603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 00:21:14.823886 kubelet[2603]: E1101 00:21:14.823552 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 00:21:14.823886 kubelet[2603]: I1101 00:21:14.823663 2603 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Nov 1 00:21:14.827813 kubelet[2603]: I1101 00:21:14.827788 2603 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 1 00:21:14.828313 kubelet[2603]: I1101 00:21:14.828270 2603 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 00:21:14.828674 kubelet[2603]: I1101 00:21:14.828417 2603 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 00:21:14.829653 kubelet[2603]: I1101 00:21:14.828884 2603 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 00:21:14.829653 kubelet[2603]: I1101 00:21:14.828904 2603 container_manager_linux.go:306] "Creating device plugin manager"
Nov 1 00:21:14.829653 kubelet[2603]: I1101 00:21:14.828937 2603 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 1 00:21:14.830340 kubelet[2603]: I1101 00:21:14.830316 2603 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:21:14.830713 kubelet[2603]: I1101 00:21:14.830692 2603 kubelet.go:475] "Attempting to sync node with API server"
Nov 1 00:21:14.830838 kubelet[2603]: I1101 00:21:14.830823 2603 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 00:21:14.830948 kubelet[2603]: I1101 00:21:14.830935 2603 kubelet.go:387] "Adding apiserver pod source"
Nov 1 00:21:14.831063 kubelet[2603]: I1101 00:21:14.831048 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 00:21:14.835312 kubelet[2603]: I1101 00:21:14.835287 2603 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 00:21:14.836395 kubelet[2603]: I1101 00:21:14.836334 2603 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 1 00:21:14.836556 kubelet[2603]: I1101 00:21:14.836538 2603 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 1 00:21:14.872346 kubelet[2603]: I1101 00:21:14.872307 2603 server.go:1262] "Started kubelet"
Nov 1 00:21:14.874577 kubelet[2603]: I1101 00:21:14.874523 2603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 00:21:14.875503 kubelet[2603]: I1101 00:21:14.875429 2603 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 00:21:14.880472 kubelet[2603]: I1101 00:21:14.880150 2603 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 1 00:21:14.881535 kubelet[2603]: I1101 00:21:14.877808 2603 server.go:310] "Adding debug handlers to kubelet server"
Nov 1 00:21:14.888680 kubelet[2603]: I1101 00:21:14.879034 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 00:21:14.888680 kubelet[2603]: I1101 00:21:14.878885 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 00:21:14.891063 kubelet[2603]: I1101 00:21:14.891037 2603 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 1 00:21:14.896952 kubelet[2603]: I1101 00:21:14.883377 2603 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 00:21:14.897336 kubelet[2603]: I1101 00:21:14.897079 2603 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 1 00:21:14.897336 kubelet[2603]: I1101 00:21:14.897448 2603 reconciler.go:29] "Reconciler: start to sync state"
Nov 1 00:21:14.903969 kubelet[2603]: I1101 00:21:14.903927 2603 factory.go:223] Registration of the systemd container factory successfully
Nov 1 00:21:14.904129 kubelet[2603]: I1101 00:21:14.904053 2603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 00:21:14.915627 kubelet[2603]: E1101 00:21:14.915192 2603 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 00:21:14.916871 kubelet[2603]: I1101 00:21:14.916577 2603 factory.go:223] Registration of the containerd container factory successfully
Nov 1 00:21:14.950844 kubelet[2603]: I1101 00:21:14.950681 2603 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 1 00:21:14.960572 kubelet[2603]: I1101 00:21:14.959790 2603 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 1 00:21:14.960572 kubelet[2603]: I1101 00:21:14.959827 2603 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 1 00:21:14.960572 kubelet[2603]: I1101 00:21:14.959861 2603 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 1 00:21:14.960572 kubelet[2603]: E1101 00:21:14.959923 2603 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 00:21:15.030471 kubelet[2603]: I1101 00:21:15.030433 2603 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 00:21:15.030471 kubelet[2603]: I1101 00:21:15.030457 2603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 00:21:15.030471 kubelet[2603]: I1101 00:21:15.030486 2603 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:21:15.032250 kubelet[2603]: I1101 00:21:15.032209 2603 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 1 00:21:15.032454 kubelet[2603]: I1101 00:21:15.032245 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 1 00:21:15.032454 kubelet[2603]: I1101 00:21:15.032282 2603 policy_none.go:49] "None policy: Start"
Nov 1 00:21:15.032454 kubelet[2603]: I1101 00:21:15.032298 2603 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 1 00:21:15.032454 kubelet[2603]: I1101 00:21:15.032318 2603 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 1 00:21:15.032808 kubelet[2603]: I1101 00:21:15.032494 2603 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 1 00:21:15.032808 kubelet[2603]: I1101 00:21:15.032509 2603 policy_none.go:47] "Start"
Nov 1 00:21:15.045129 kubelet[2603]: E1101 00:21:15.043472 2603 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 1 00:21:15.048464 kubelet[2603]: I1101 00:21:15.046288 2603 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 00:21:15.048464 kubelet[2603]: I1101 00:21:15.046318 2603 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 00:21:15.048464 kubelet[2603]: I1101 00:21:15.046881 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 00:21:15.053208 kubelet[2603]: E1101 00:21:15.053128 2603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 00:21:15.061033 kubelet[2603]: I1101 00:21:15.060987 2603 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.061971 kubelet[2603]: I1101 00:21:15.061945 2603 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.062268 kubelet[2603]: I1101 00:21:15.062246 2603 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.078536 kubelet[2603]: I1101 00:21:15.078428 2603 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Nov 1 00:21:15.079141 kubelet[2603]: I1101 00:21:15.078786 2603 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Nov 1 00:21:15.081603 kubelet[2603]: I1101 00:21:15.081375 2603 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Nov 1 00:21:15.081603 kubelet[2603]: E1101 00:21:15.081470 2603 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100659 kubelet[2603]: I1101 00:21:15.098663 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/148ff777142ac6f17d9c431b885bf305-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"148ff777142ac6f17d9c431b885bf305\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100659 kubelet[2603]: I1101 00:21:15.098713 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100659 kubelet[2603]: I1101 00:21:15.099281 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100659 kubelet[2603]: I1101 00:21:15.099324 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100909 kubelet[2603]: I1101 00:21:15.099393 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6043162b08e884c0a4c91453d7d4467f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"6043162b08e884c0a4c91453d7d4467f\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100909 kubelet[2603]: I1101 00:21:15.099423 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/148ff777142ac6f17d9c431b885bf305-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"148ff777142ac6f17d9c431b885bf305\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100909 kubelet[2603]: I1101 00:21:15.099455 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/148ff777142ac6f17d9c431b885bf305-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"148ff777142ac6f17d9c431b885bf305\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.100909 kubelet[2603]: I1101 00:21:15.099504 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.101035 kubelet[2603]: I1101 00:21:15.099539 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50e2a0b4dc0a7a5984c9ec4dce856790-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" (UID: \"50e2a0b4dc0a7a5984c9ec4dce856790\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.173340 kubelet[2603]: I1101 00:21:15.173284 2603 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.182250 kubelet[2603]: I1101 00:21:15.182204 2603 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.182408 kubelet[2603]: I1101 00:21:15.182316 2603 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9"
Nov 1 00:21:15.832900 kubelet[2603]: I1101 00:21:15.832394 2603 apiserver.go:52] "Watching apiserver"
Nov 1 00:21:15.897256 kubelet[2603]: I1101 00:21:15.897207 2603 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 1 00:21:15.908088 kubelet[2603]: I1101 00:21:15.908008 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" podStartSLOduration=0.907986241 podStartE2EDuration="907.986241ms" podCreationTimestamp="2025-11-01 00:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:15.907573052 +0000 UTC m=+1.190582922" watchObservedRunningTime="2025-11-01 00:21:15.907986241 +0000 UTC m=+1.190996110"
Nov 1 00:21:15.935125 kubelet[2603]: I1101 00:21:15.935048 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" podStartSLOduration=2.935025693 podStartE2EDuration="2.935025693s" podCreationTimestamp="2025-11-01 00:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:15.923559456 +0000 UTC m=+1.206569324" watchObservedRunningTime="2025-11-01 00:21:15.935025693 +0000 UTC m=+1.218035564"
Nov 1 00:21:15.953686 kubelet[2603]: I1101 00:21:15.953007 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" podStartSLOduration=0.952979713 podStartE2EDuration="952.979713ms" podCreationTimestamp="2025-11-01 00:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:15.935346135 +0000 UTC m=+1.218356003" watchObservedRunningTime="2025-11-01 00:21:15.952979713 +0000 UTC m=+1.235989572"
Nov 1 00:21:19.159114 kubelet[2603]: I1101 00:21:19.159061 2603 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 1 00:21:19.159851 containerd[1460]: time="2025-11-01T00:21:19.159805599Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 1 00:21:19.160282 kubelet[2603]: I1101 00:21:19.160132 2603 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 1 00:21:19.897651 systemd[1]: Created slice kubepods-besteffort-poda5ff7fef_bc35_4914_8806_13c66738d587.slice - libcontainer container kubepods-besteffort-poda5ff7fef_bc35_4914_8806_13c66738d587.slice.
Nov 1 00:21:19.930371 kubelet[2603]: I1101 00:21:19.930312 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5ff7fef-bc35-4914-8806-13c66738d587-kube-proxy\") pod \"kube-proxy-ltv7t\" (UID: \"a5ff7fef-bc35-4914-8806-13c66738d587\") " pod="kube-system/kube-proxy-ltv7t"
Nov 1 00:21:19.930371 kubelet[2603]: I1101 00:21:19.930375 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5ff7fef-bc35-4914-8806-13c66738d587-xtables-lock\") pod \"kube-proxy-ltv7t\" (UID: \"a5ff7fef-bc35-4914-8806-13c66738d587\") " pod="kube-system/kube-proxy-ltv7t"
Nov 1 00:21:19.930637 kubelet[2603]: I1101 00:21:19.930400 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7clfr\" (UniqueName: \"kubernetes.io/projected/a5ff7fef-bc35-4914-8806-13c66738d587-kube-api-access-7clfr\") pod \"kube-proxy-ltv7t\" (UID: \"a5ff7fef-bc35-4914-8806-13c66738d587\") " pod="kube-system/kube-proxy-ltv7t"
Nov 1 00:21:19.930637 kubelet[2603]: I1101 00:21:19.930428 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ff7fef-bc35-4914-8806-13c66738d587-lib-modules\") pod \"kube-proxy-ltv7t\" (UID: \"a5ff7fef-bc35-4914-8806-13c66738d587\") " pod="kube-system/kube-proxy-ltv7t"
Nov 1 00:21:20.037513 kubelet[2603]: E1101 00:21:20.037470 2603 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 1 00:21:20.037513 kubelet[2603]: E1101 00:21:20.037510 2603 projected.go:196] Error preparing data for projected volume kube-api-access-7clfr for pod kube-system/kube-proxy-ltv7t: configmap "kube-root-ca.crt" not found
Nov 1 00:21:20.037862 kubelet[2603]: E1101 00:21:20.037663 2603 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5ff7fef-bc35-4914-8806-13c66738d587-kube-api-access-7clfr podName:a5ff7fef-bc35-4914-8806-13c66738d587 nodeName:}" failed. No retries permitted until 2025-11-01 00:21:20.537623677 +0000 UTC m=+5.820633536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7clfr" (UniqueName: "kubernetes.io/projected/a5ff7fef-bc35-4914-8806-13c66738d587-kube-api-access-7clfr") pod "kube-proxy-ltv7t" (UID: "a5ff7fef-bc35-4914-8806-13c66738d587") : configmap "kube-root-ca.crt" not found
Nov 1 00:21:20.330214 systemd[1]: Created slice kubepods-besteffort-pod370a0e6d_c369_4715_a21a_64d2b01d7cea.slice - libcontainer container kubepods-besteffort-pod370a0e6d_c369_4715_a21a_64d2b01d7cea.slice.
Nov 1 00:21:20.333963 kubelet[2603]: I1101 00:21:20.333926 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/370a0e6d-c369-4715-a21a-64d2b01d7cea-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-xw4z7\" (UID: \"370a0e6d-c369-4715-a21a-64d2b01d7cea\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xw4z7"
Nov 1 00:21:20.334498 kubelet[2603]: I1101 00:21:20.333980 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcdp\" (UniqueName: \"kubernetes.io/projected/370a0e6d-c369-4715-a21a-64d2b01d7cea-kube-api-access-pwcdp\") pod \"tigera-operator-65cdcdfd6d-xw4z7\" (UID: \"370a0e6d-c369-4715-a21a-64d2b01d7cea\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xw4z7"
Nov 1 00:21:20.643978 containerd[1460]: time="2025-11-01T00:21:20.643929220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xw4z7,Uid:370a0e6d-c369-4715-a21a-64d2b01d7cea,Namespace:tigera-operator,Attempt:0,}"
Nov 1 00:21:20.685980 containerd[1460]: time="2025-11-01T00:21:20.685817470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:21:20.685980 containerd[1460]: time="2025-11-01T00:21:20.685882765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:21:20.685980 containerd[1460]: time="2025-11-01T00:21:20.685900307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:21:20.687492 containerd[1460]: time="2025-11-01T00:21:20.686006026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:21:20.720977 systemd[1]: Started cri-containerd-1abfb0cf49a90f6f5f5cafc1cc3871074d9f64fbba54ce57eda946440fc8a5c0.scope - libcontainer container 1abfb0cf49a90f6f5f5cafc1cc3871074d9f64fbba54ce57eda946440fc8a5c0.
Nov 1 00:21:20.782956 containerd[1460]: time="2025-11-01T00:21:20.782724062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xw4z7,Uid:370a0e6d-c369-4715-a21a-64d2b01d7cea,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1abfb0cf49a90f6f5f5cafc1cc3871074d9f64fbba54ce57eda946440fc8a5c0\""
Nov 1 00:21:20.786122 containerd[1460]: time="2025-11-01T00:21:20.786053510Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 1 00:21:20.813153 containerd[1460]: time="2025-11-01T00:21:20.813099622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltv7t,Uid:a5ff7fef-bc35-4914-8806-13c66738d587,Namespace:kube-system,Attempt:0,}"
Nov 1 00:21:20.847468 containerd[1460]: time="2025-11-01T00:21:20.847065184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:21:20.847468 containerd[1460]: time="2025-11-01T00:21:20.847204461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:21:20.847468 containerd[1460]: time="2025-11-01T00:21:20.847240511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:21:20.847468 containerd[1460]: time="2025-11-01T00:21:20.847391519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:21:20.876973 systemd[1]: Started cri-containerd-aa3117b01010ca0e165de3fe00784bc2d38e4a47f58284c010e57c4125199b36.scope - libcontainer container aa3117b01010ca0e165de3fe00784bc2d38e4a47f58284c010e57c4125199b36.
Nov 1 00:21:20.916904 containerd[1460]: time="2025-11-01T00:21:20.916682956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltv7t,Uid:a5ff7fef-bc35-4914-8806-13c66738d587,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa3117b01010ca0e165de3fe00784bc2d38e4a47f58284c010e57c4125199b36\""
Nov 1 00:21:20.927158 containerd[1460]: time="2025-11-01T00:21:20.927105812Z" level=info msg="CreateContainer within sandbox \"aa3117b01010ca0e165de3fe00784bc2d38e4a47f58284c010e57c4125199b36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 1 00:21:20.953769 containerd[1460]: time="2025-11-01T00:21:20.953699806Z" level=info msg="CreateContainer within sandbox \"aa3117b01010ca0e165de3fe00784bc2d38e4a47f58284c010e57c4125199b36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d157b8a6533b723e9a18c738d5deb3eb4806dd724fc4cf21f6fa4547ce01c568\""
Nov 1 00:21:20.956643 containerd[1460]: time="2025-11-01T00:21:20.955709168Z" level=info msg="StartContainer for \"d157b8a6533b723e9a18c738d5deb3eb4806dd724fc4cf21f6fa4547ce01c568\""
Nov 1 00:21:20.997841 systemd[1]: Started cri-containerd-d157b8a6533b723e9a18c738d5deb3eb4806dd724fc4cf21f6fa4547ce01c568.scope - libcontainer container d157b8a6533b723e9a18c738d5deb3eb4806dd724fc4cf21f6fa4547ce01c568.
Nov 1 00:21:21.059960 containerd[1460]: time="2025-11-01T00:21:21.059859898Z" level=info msg="StartContainer for \"d157b8a6533b723e9a18c738d5deb3eb4806dd724fc4cf21f6fa4547ce01c568\" returns successfully"
Nov 1 00:21:22.042174 kubelet[2603]: I1101 00:21:22.041435 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ltv7t" podStartSLOduration=3.041415868 podStartE2EDuration="3.041415868s" podCreationTimestamp="2025-11-01 00:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:22.041118839 +0000 UTC m=+7.324128709" watchObservedRunningTime="2025-11-01 00:21:22.041415868 +0000 UTC m=+7.324425737"
Nov 1 00:21:22.395638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392263251.mount: Deactivated successfully.
Nov 1 00:21:23.440759 containerd[1460]: time="2025-11-01T00:21:23.440686627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:23.442482 containerd[1460]: time="2025-11-01T00:21:23.442275732Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 1 00:21:23.445693 containerd[1460]: time="2025-11-01T00:21:23.444059456Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:23.447813 containerd[1460]: time="2025-11-01T00:21:23.447768161Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:23.448958 containerd[1460]: time="2025-11-01T00:21:23.448909991Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.662801555s"
Nov 1 00:21:23.449140 containerd[1460]: time="2025-11-01T00:21:23.448963815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 1 00:21:23.454935 containerd[1460]: time="2025-11-01T00:21:23.454879261Z" level=info msg="CreateContainer within sandbox \"1abfb0cf49a90f6f5f5cafc1cc3871074d9f64fbba54ce57eda946440fc8a5c0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 1 00:21:23.474930 containerd[1460]: time="2025-11-01T00:21:23.474868361Z" level=info msg="CreateContainer within sandbox \"1abfb0cf49a90f6f5f5cafc1cc3871074d9f64fbba54ce57eda946440fc8a5c0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7e1fb3c7c8a79965707f580564cd3ba68d8399eafddfa5128200bff690070122\""
Nov 1 00:21:23.476033 containerd[1460]: time="2025-11-01T00:21:23.475976002Z" level=info msg="StartContainer for \"7e1fb3c7c8a79965707f580564cd3ba68d8399eafddfa5128200bff690070122\""
Nov 1 00:21:23.525876 systemd[1]: Started cri-containerd-7e1fb3c7c8a79965707f580564cd3ba68d8399eafddfa5128200bff690070122.scope - libcontainer container 7e1fb3c7c8a79965707f580564cd3ba68d8399eafddfa5128200bff690070122.
Nov 1 00:21:23.566899 containerd[1460]: time="2025-11-01T00:21:23.566616197Z" level=info msg="StartContainer for \"7e1fb3c7c8a79965707f580564cd3ba68d8399eafddfa5128200bff690070122\" returns successfully" Nov 1 00:21:24.905329 kubelet[2603]: I1101 00:21:24.905228 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-xw4z7" podStartSLOduration=2.239282374 podStartE2EDuration="4.905203324s" podCreationTimestamp="2025-11-01 00:21:20 +0000 UTC" firstStartedPulling="2025-11-01 00:21:20.784848574 +0000 UTC m=+6.067858429" lastFinishedPulling="2025-11-01 00:21:23.450769518 +0000 UTC m=+8.733779379" observedRunningTime="2025-11-01 00:21:24.037167809 +0000 UTC m=+9.320177695" watchObservedRunningTime="2025-11-01 00:21:24.905203324 +0000 UTC m=+10.188213196" Nov 1 00:21:30.934983 sudo[1737]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:30.981950 sshd[1734]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:30.988525 systemd[1]: sshd@8-10.128.0.8:22-147.75.109.163:60438.service: Deactivated successfully. Nov 1 00:21:30.992385 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:21:30.992830 systemd[1]: session-9.scope: Consumed 8.587s CPU time, 160.0M memory peak, 0B memory swap peak. Nov 1 00:21:30.996120 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:21:31.001063 systemd-logind[1442]: Removed session 9. Nov 1 00:21:38.513530 systemd[1]: Created slice kubepods-besteffort-pod4a174d77_730d_4b33_b712_9df94aaf3a6b.slice - libcontainer container kubepods-besteffort-pod4a174d77_730d_4b33_b712_9df94aaf3a6b.slice. 
Nov 1 00:21:38.569651 kubelet[2603]: I1101 00:21:38.569250 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a174d77-730d-4b33-b712-9df94aaf3a6b-tigera-ca-bundle\") pod \"calico-typha-6cd6bccf56-lcdlv\" (UID: \"4a174d77-730d-4b33-b712-9df94aaf3a6b\") " pod="calico-system/calico-typha-6cd6bccf56-lcdlv" Nov 1 00:21:38.569651 kubelet[2603]: I1101 00:21:38.569419 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a174d77-730d-4b33-b712-9df94aaf3a6b-typha-certs\") pod \"calico-typha-6cd6bccf56-lcdlv\" (UID: \"4a174d77-730d-4b33-b712-9df94aaf3a6b\") " pod="calico-system/calico-typha-6cd6bccf56-lcdlv" Nov 1 00:21:38.569651 kubelet[2603]: I1101 00:21:38.569570 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqz6\" (UniqueName: \"kubernetes.io/projected/4a174d77-730d-4b33-b712-9df94aaf3a6b-kube-api-access-drqz6\") pod \"calico-typha-6cd6bccf56-lcdlv\" (UID: \"4a174d77-730d-4b33-b712-9df94aaf3a6b\") " pod="calico-system/calico-typha-6cd6bccf56-lcdlv" Nov 1 00:21:38.707877 systemd[1]: Created slice kubepods-besteffort-podff251466_ad5c_4987_a79d_7a94b6a8c196.slice - libcontainer container kubepods-besteffort-podff251466_ad5c_4987_a79d_7a94b6a8c196.slice. 
Nov 1 00:21:38.770852 kubelet[2603]: I1101 00:21:38.770711 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff251466-ad5c-4987-a79d-7a94b6a8c196-node-certs\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771573 kubelet[2603]: I1101 00:21:38.771072 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff251466-ad5c-4987-a79d-7a94b6a8c196-tigera-ca-bundle\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771573 kubelet[2603]: I1101 00:21:38.771158 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-cni-net-dir\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771573 kubelet[2603]: I1101 00:21:38.771195 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-flexvol-driver-host\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771573 kubelet[2603]: I1101 00:21:38.771226 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-cni-log-dir\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771573 kubelet[2603]: I1101 00:21:38.771250 2603 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-var-run-calico\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771918 kubelet[2603]: I1101 00:21:38.771279 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-lib-modules\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771918 kubelet[2603]: I1101 00:21:38.771305 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-xtables-lock\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771918 kubelet[2603]: I1101 00:21:38.771329 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-var-lib-calico\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771918 kubelet[2603]: I1101 00:21:38.771359 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-cni-bin-dir\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.771918 kubelet[2603]: I1101 00:21:38.771385 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff251466-ad5c-4987-a79d-7a94b6a8c196-policysync\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.772240 kubelet[2603]: I1101 00:21:38.771409 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5td\" (UniqueName: \"kubernetes.io/projected/ff251466-ad5c-4987-a79d-7a94b6a8c196-kube-api-access-gf5td\") pod \"calico-node-498p2\" (UID: \"ff251466-ad5c-4987-a79d-7a94b6a8c196\") " pod="calico-system/calico-node-498p2" Nov 1 00:21:38.840174 containerd[1460]: time="2025-11-01T00:21:38.839489536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cd6bccf56-lcdlv,Uid:4a174d77-730d-4b33-b712-9df94aaf3a6b,Namespace:calico-system,Attempt:0,}" Nov 1 00:21:38.900349 kubelet[2603]: E1101 00:21:38.897012 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.900349 kubelet[2603]: W1101 00:21:38.897068 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.900349 kubelet[2603]: E1101 00:21:38.897122 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.918090 kubelet[2603]: E1101 00:21:38.916763 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.918090 kubelet[2603]: W1101 00:21:38.916849 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.918090 kubelet[2603]: E1101 00:21:38.916883 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.937151 containerd[1460]: time="2025-11-01T00:21:38.935522288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:38.937151 containerd[1460]: time="2025-11-01T00:21:38.935702346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:38.937151 containerd[1460]: time="2025-11-01T00:21:38.935723418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:38.937151 containerd[1460]: time="2025-11-01T00:21:38.935994348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:38.938204 kubelet[2603]: E1101 00:21:38.938143 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:38.946232 kubelet[2603]: E1101 00:21:38.946195 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.946232 kubelet[2603]: W1101 00:21:38.946229 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.946468 kubelet[2603]: E1101 00:21:38.946257 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.947627 kubelet[2603]: E1101 00:21:38.946927 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.947627 kubelet[2603]: W1101 00:21:38.946951 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.947627 kubelet[2603]: E1101 00:21:38.946971 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.948156 kubelet[2603]: E1101 00:21:38.948007 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.948156 kubelet[2603]: W1101 00:21:38.948024 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.948156 kubelet[2603]: E1101 00:21:38.948058 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.949668 kubelet[2603]: E1101 00:21:38.949000 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.949668 kubelet[2603]: W1101 00:21:38.949023 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.949668 kubelet[2603]: E1101 00:21:38.949044 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.950624 kubelet[2603]: E1101 00:21:38.950219 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.950624 kubelet[2603]: W1101 00:21:38.950239 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.950624 kubelet[2603]: E1101 00:21:38.950257 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.951954 kubelet[2603]: E1101 00:21:38.951677 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.951954 kubelet[2603]: W1101 00:21:38.951739 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.951954 kubelet[2603]: E1101 00:21:38.951768 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.953617 kubelet[2603]: E1101 00:21:38.953240 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.953617 kubelet[2603]: W1101 00:21:38.953259 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.953617 kubelet[2603]: E1101 00:21:38.953287 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.954609 kubelet[2603]: E1101 00:21:38.953911 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.954609 kubelet[2603]: W1101 00:21:38.953935 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.954609 kubelet[2603]: E1101 00:21:38.953953 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.954609 kubelet[2603]: E1101 00:21:38.954534 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.954609 kubelet[2603]: W1101 00:21:38.954548 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.954609 kubelet[2603]: E1101 00:21:38.954564 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.955613 kubelet[2603]: E1101 00:21:38.955199 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.955613 kubelet[2603]: W1101 00:21:38.955219 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.955613 kubelet[2603]: E1101 00:21:38.955235 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.956611 kubelet[2603]: E1101 00:21:38.955852 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.956611 kubelet[2603]: W1101 00:21:38.955873 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.956611 kubelet[2603]: E1101 00:21:38.955891 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.956611 kubelet[2603]: E1101 00:21:38.956489 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.956611 kubelet[2603]: W1101 00:21:38.956505 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.956611 kubelet[2603]: E1101 00:21:38.956522 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.957617 kubelet[2603]: E1101 00:21:38.957170 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.957617 kubelet[2603]: W1101 00:21:38.957189 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.957617 kubelet[2603]: E1101 00:21:38.957206 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.957818 kubelet[2603]: E1101 00:21:38.957668 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.957818 kubelet[2603]: W1101 00:21:38.957683 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.957818 kubelet[2603]: E1101 00:21:38.957699 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.959094 kubelet[2603]: E1101 00:21:38.958314 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.959094 kubelet[2603]: W1101 00:21:38.958333 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.959094 kubelet[2603]: E1101 00:21:38.958349 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.960620 kubelet[2603]: E1101 00:21:38.959843 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.960620 kubelet[2603]: W1101 00:21:38.959866 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.960620 kubelet[2603]: E1101 00:21:38.959884 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.962670 kubelet[2603]: E1101 00:21:38.962641 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.962670 kubelet[2603]: W1101 00:21:38.962666 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.962832 kubelet[2603]: E1101 00:21:38.962684 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.965343 kubelet[2603]: E1101 00:21:38.964754 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.965343 kubelet[2603]: W1101 00:21:38.964774 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.965343 kubelet[2603]: E1101 00:21:38.964790 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.965343 kubelet[2603]: E1101 00:21:38.965121 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.965343 kubelet[2603]: W1101 00:21:38.965136 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.965343 kubelet[2603]: E1101 00:21:38.965152 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.965709 kubelet[2603]: E1101 00:21:38.965459 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.965709 kubelet[2603]: W1101 00:21:38.965473 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.965709 kubelet[2603]: E1101 00:21:38.965491 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.977561 kubelet[2603]: E1101 00:21:38.977206 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.977561 kubelet[2603]: W1101 00:21:38.977233 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.977561 kubelet[2603]: E1101 00:21:38.977257 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.977561 kubelet[2603]: I1101 00:21:38.977298 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13cec52-774e-41dd-8b73-7a0c3559c1e0-kubelet-dir\") pod \"csi-node-driver-cvqzr\" (UID: \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\") " pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:38.978772 kubelet[2603]: E1101 00:21:38.978742 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.978772 kubelet[2603]: W1101 00:21:38.978770 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.979299 kubelet[2603]: E1101 00:21:38.978792 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.979299 kubelet[2603]: I1101 00:21:38.978862 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a13cec52-774e-41dd-8b73-7a0c3559c1e0-varrun\") pod \"csi-node-driver-cvqzr\" (UID: \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\") " pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:38.979299 kubelet[2603]: E1101 00:21:38.980634 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.979299 kubelet[2603]: W1101 00:21:38.980650 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.980952 kubelet[2603]: E1101 00:21:38.980671 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.983901 kubelet[2603]: E1101 00:21:38.983427 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.983901 kubelet[2603]: W1101 00:21:38.983448 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.983901 kubelet[2603]: E1101 00:21:38.983616 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.986432 kubelet[2603]: E1101 00:21:38.985657 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.986432 kubelet[2603]: W1101 00:21:38.985678 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.986432 kubelet[2603]: E1101 00:21:38.985698 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.989623 kubelet[2603]: E1101 00:21:38.989374 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.991025 kubelet[2603]: W1101 00:21:38.990427 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.991437 kubelet[2603]: E1101 00:21:38.991225 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.991437 kubelet[2603]: I1101 00:21:38.990049 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k66tj\" (UniqueName: \"kubernetes.io/projected/a13cec52-774e-41dd-8b73-7a0c3559c1e0-kube-api-access-k66tj\") pod \"csi-node-driver-cvqzr\" (UID: \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\") " pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:38.994978 kubelet[2603]: E1101 00:21:38.994300 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.995303 kubelet[2603]: W1101 00:21:38.995116 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.995303 kubelet[2603]: E1101 00:21:38.995154 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.995303 kubelet[2603]: I1101 00:21:38.995207 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a13cec52-774e-41dd-8b73-7a0c3559c1e0-registration-dir\") pod \"csi-node-driver-cvqzr\" (UID: \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\") " pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:38.996042 kubelet[2603]: E1101 00:21:38.995965 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.996042 kubelet[2603]: W1101 00:21:38.995998 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.996042 kubelet[2603]: E1101 00:21:38.996018 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.997133 kubelet[2603]: E1101 00:21:38.997043 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.997340 kubelet[2603]: W1101 00:21:38.997146 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.997340 kubelet[2603]: E1101 00:21:38.997173 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:38.999620 kubelet[2603]: E1101 00:21:38.998850 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:38.999620 kubelet[2603]: W1101 00:21:38.999025 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:38.999620 kubelet[2603]: E1101 00:21:38.999052 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:38.999381 systemd[1]: Started cri-containerd-83a8f10bb0db22187c9414e2ff7f24d2a16911ade6799983be1b1b98de035dbb.scope - libcontainer container 83a8f10bb0db22187c9414e2ff7f24d2a16911ade6799983be1b1b98de035dbb. Nov 1 00:21:39.004482 kubelet[2603]: E1101 00:21:39.004442 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.004636 kubelet[2603]: W1101 00:21:39.004530 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.004636 kubelet[2603]: E1101 00:21:39.004556 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.004944 kubelet[2603]: I1101 00:21:39.004705 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a13cec52-774e-41dd-8b73-7a0c3559c1e0-socket-dir\") pod \"csi-node-driver-cvqzr\" (UID: \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\") " pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:39.005659 kubelet[2603]: E1101 00:21:39.005623 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.006270 kubelet[2603]: W1101 00:21:39.005804 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.006270 kubelet[2603]: E1101 00:21:39.005829 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.007141 kubelet[2603]: E1101 00:21:39.006908 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.007141 kubelet[2603]: W1101 00:21:39.006930 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.007141 kubelet[2603]: E1101 00:21:39.006951 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.008184 kubelet[2603]: E1101 00:21:39.008004 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.008184 kubelet[2603]: W1101 00:21:39.008023 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.008184 kubelet[2603]: E1101 00:21:39.008043 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.009912 kubelet[2603]: E1101 00:21:39.009658 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.009912 kubelet[2603]: W1101 00:21:39.009680 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.009912 kubelet[2603]: E1101 00:21:39.009825 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.019682 containerd[1460]: time="2025-11-01T00:21:39.019635001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-498p2,Uid:ff251466-ad5c-4987-a79d-7a94b6a8c196,Namespace:calico-system,Attempt:0,}" Nov 1 00:21:39.096304 containerd[1460]: time="2025-11-01T00:21:39.095430350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:39.096482 containerd[1460]: time="2025-11-01T00:21:39.096350175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:39.097703 containerd[1460]: time="2025-11-01T00:21:39.097527244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:39.102854 containerd[1460]: time="2025-11-01T00:21:39.101799318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:39.111701 kubelet[2603]: E1101 00:21:39.111423 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.111701 kubelet[2603]: W1101 00:21:39.111449 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.111701 kubelet[2603]: E1101 00:21:39.111479 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.112367 kubelet[2603]: E1101 00:21:39.112065 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.112367 kubelet[2603]: W1101 00:21:39.112083 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.112367 kubelet[2603]: E1101 00:21:39.112106 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.114203 kubelet[2603]: E1101 00:21:39.113313 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.114203 kubelet[2603]: W1101 00:21:39.113333 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.114203 kubelet[2603]: E1101 00:21:39.113350 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.114203 kubelet[2603]: E1101 00:21:39.113926 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.114203 kubelet[2603]: W1101 00:21:39.113942 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.114203 kubelet[2603]: E1101 00:21:39.113979 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.115776 kubelet[2603]: E1101 00:21:39.115755 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.116010 kubelet[2603]: W1101 00:21:39.115891 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.116010 kubelet[2603]: E1101 00:21:39.115917 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.116576 kubelet[2603]: E1101 00:21:39.116558 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.116829 kubelet[2603]: W1101 00:21:39.116717 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.116829 kubelet[2603]: E1101 00:21:39.116745 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.118674 kubelet[2603]: E1101 00:21:39.118450 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.118674 kubelet[2603]: W1101 00:21:39.118471 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.118674 kubelet[2603]: E1101 00:21:39.118490 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.119802 kubelet[2603]: E1101 00:21:39.119575 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.119802 kubelet[2603]: W1101 00:21:39.119650 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.119802 kubelet[2603]: E1101 00:21:39.119671 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.120806 kubelet[2603]: E1101 00:21:39.120399 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.120806 kubelet[2603]: W1101 00:21:39.120419 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.120806 kubelet[2603]: E1101 00:21:39.120461 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.121448 kubelet[2603]: E1101 00:21:39.121294 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.121448 kubelet[2603]: W1101 00:21:39.121313 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.121448 kubelet[2603]: E1101 00:21:39.121331 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.122619 kubelet[2603]: E1101 00:21:39.122339 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.122619 kubelet[2603]: W1101 00:21:39.122359 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.122619 kubelet[2603]: E1101 00:21:39.122492 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.123923 kubelet[2603]: E1101 00:21:39.123698 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.123923 kubelet[2603]: W1101 00:21:39.123736 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.123923 kubelet[2603]: E1101 00:21:39.123755 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.124951 kubelet[2603]: E1101 00:21:39.124680 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.124951 kubelet[2603]: W1101 00:21:39.124699 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.124951 kubelet[2603]: E1101 00:21:39.124716 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.125900 kubelet[2603]: E1101 00:21:39.125500 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.125900 kubelet[2603]: W1101 00:21:39.125518 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.125900 kubelet[2603]: E1101 00:21:39.125620 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.127291 kubelet[2603]: E1101 00:21:39.126688 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.127291 kubelet[2603]: W1101 00:21:39.126919 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.127291 kubelet[2603]: E1101 00:21:39.126939 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.128281 kubelet[2603]: E1101 00:21:39.127997 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.128281 kubelet[2603]: W1101 00:21:39.128117 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.128281 kubelet[2603]: E1101 00:21:39.128142 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.129238 kubelet[2603]: E1101 00:21:39.129052 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.129238 kubelet[2603]: W1101 00:21:39.129074 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.129238 kubelet[2603]: E1101 00:21:39.129212 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.130673 kubelet[2603]: E1101 00:21:39.130357 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.130673 kubelet[2603]: W1101 00:21:39.130377 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.130673 kubelet[2603]: E1101 00:21:39.130395 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.133112 kubelet[2603]: E1101 00:21:39.132916 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.133112 kubelet[2603]: W1101 00:21:39.132936 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.133112 kubelet[2603]: E1101 00:21:39.132954 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.134024 kubelet[2603]: E1101 00:21:39.133970 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.134273 kubelet[2603]: W1101 00:21:39.133987 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.134273 kubelet[2603]: E1101 00:21:39.134161 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.135101 kubelet[2603]: E1101 00:21:39.134883 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.135101 kubelet[2603]: W1101 00:21:39.134901 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.135101 kubelet[2603]: E1101 00:21:39.134917 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.135675 kubelet[2603]: E1101 00:21:39.135539 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.135675 kubelet[2603]: W1101 00:21:39.135616 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.135984 kubelet[2603]: E1101 00:21:39.135637 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.136781 kubelet[2603]: E1101 00:21:39.136514 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.136781 kubelet[2603]: W1101 00:21:39.136534 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.136781 kubelet[2603]: E1101 00:21:39.136563 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.137895 kubelet[2603]: E1101 00:21:39.137872 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.138126 kubelet[2603]: W1101 00:21:39.138027 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.138126 kubelet[2603]: E1101 00:21:39.138060 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:39.138765 kubelet[2603]: E1101 00:21:39.138681 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.138765 kubelet[2603]: W1101 00:21:39.138700 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.138765 kubelet[2603]: E1101 00:21:39.138718 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.162458 kubelet[2603]: E1101 00:21:39.162287 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:39.162458 kubelet[2603]: W1101 00:21:39.162325 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:39.162458 kubelet[2603]: E1101 00:21:39.162354 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:39.167906 systemd[1]: Started cri-containerd-1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7.scope - libcontainer container 1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7. 
Nov 1 00:21:39.198793 containerd[1460]: time="2025-11-01T00:21:39.198740601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cd6bccf56-lcdlv,Uid:4a174d77-730d-4b33-b712-9df94aaf3a6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"83a8f10bb0db22187c9414e2ff7f24d2a16911ade6799983be1b1b98de035dbb\"" Nov 1 00:21:39.202680 containerd[1460]: time="2025-11-01T00:21:39.201903147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:21:39.238828 containerd[1460]: time="2025-11-01T00:21:39.238442776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-498p2,Uid:ff251466-ad5c-4987-a79d-7a94b6a8c196,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\"" Nov 1 00:21:40.442109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529814789.mount: Deactivated successfully. Nov 1 00:21:40.962500 kubelet[2603]: E1101 00:21:40.962420 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:41.604539 containerd[1460]: time="2025-11-01T00:21:41.604469316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:41.606159 containerd[1460]: time="2025-11-01T00:21:41.605939056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:21:41.607710 containerd[1460]: time="2025-11-01T00:21:41.607652693Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:41.612236 
containerd[1460]: time="2025-11-01T00:21:41.610944281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:41.612236 containerd[1460]: time="2025-11-01T00:21:41.612036454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.410081132s" Nov 1 00:21:41.612236 containerd[1460]: time="2025-11-01T00:21:41.612081820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:21:41.615109 containerd[1460]: time="2025-11-01T00:21:41.615072076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:21:41.640534 containerd[1460]: time="2025-11-01T00:21:41.640295767Z" level=info msg="CreateContainer within sandbox \"83a8f10bb0db22187c9414e2ff7f24d2a16911ade6799983be1b1b98de035dbb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:21:41.664218 containerd[1460]: time="2025-11-01T00:21:41.664143749Z" level=info msg="CreateContainer within sandbox \"83a8f10bb0db22187c9414e2ff7f24d2a16911ade6799983be1b1b98de035dbb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"036f675d5fa6705355f3e53ffb9c66c816e677fcbe6a735cd3d4960e0535f99a\"" Nov 1 00:21:41.665179 containerd[1460]: time="2025-11-01T00:21:41.665107200Z" level=info msg="StartContainer for \"036f675d5fa6705355f3e53ffb9c66c816e677fcbe6a735cd3d4960e0535f99a\"" Nov 1 00:21:41.721872 systemd[1]: Started 
cri-containerd-036f675d5fa6705355f3e53ffb9c66c816e677fcbe6a735cd3d4960e0535f99a.scope - libcontainer container 036f675d5fa6705355f3e53ffb9c66c816e677fcbe6a735cd3d4960e0535f99a. Nov 1 00:21:41.786571 containerd[1460]: time="2025-11-01T00:21:41.786518995Z" level=info msg="StartContainer for \"036f675d5fa6705355f3e53ffb9c66c816e677fcbe6a735cd3d4960e0535f99a\" returns successfully" Nov 1 00:21:42.187624 kubelet[2603]: E1101 00:21:42.187569 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:42.187624 kubelet[2603]: W1101 00:21:42.187616 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:42.188486 kubelet[2603]: E1101 00:21:42.187649 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:42.188486 kubelet[2603]: E1101 00:21:42.188014 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:42.188486 kubelet[2603]: W1101 00:21:42.188047 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:42.188486 kubelet[2603]: E1101 00:21:42.188068 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:42.189115 kubelet[2603]: E1101 00:21:42.189020 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:42.189115 kubelet[2603]: W1101 00:21:42.189041 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:42.189115 kubelet[2603]: E1101 00:21:42.189061 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:21:42.189770 kubelet[2603]: E1101 00:21:42.189695 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:42.189770 kubelet[2603]: W1101 00:21:42.189711 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:42.189770 kubelet[2603]: E1101 00:21:42.189729 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:21:42.190691 kubelet[2603]: E1101 00:21:42.190653 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:21:42.190691 kubelet[2603]: W1101 00:21:42.190679 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:21:42.191839 kubelet[2603]: E1101 00:21:42.190698 2603 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:21:42.777513 containerd[1460]: time="2025-11-01T00:21:42.777445230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:42.779006 containerd[1460]: time="2025-11-01T00:21:42.778803960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:21:42.781615 containerd[1460]: time="2025-11-01T00:21:42.780185702Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:42.783569 containerd[1460]: time="2025-11-01T00:21:42.783524131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:42.784623 containerd[1460]: time="2025-11-01T00:21:42.784552539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.169431318s" Nov 1 00:21:42.784843 containerd[1460]: time="2025-11-01T00:21:42.784814963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:21:42.792403 containerd[1460]: time="2025-11-01T00:21:42.792338195Z" level=info msg="CreateContainer within sandbox \"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:21:42.815296 containerd[1460]: time="2025-11-01T00:21:42.815235830Z" level=info msg="CreateContainer within sandbox \"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36\"" Nov 1 00:21:42.816862 containerd[1460]: time="2025-11-01T00:21:42.816823645Z" level=info msg="StartContainer for \"044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36\"" Nov 1 00:21:42.865820 systemd[1]: run-containerd-runc-k8s.io-044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36-runc.0cDmDQ.mount: Deactivated successfully. Nov 1 00:21:42.871834 systemd[1]: Started cri-containerd-044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36.scope - libcontainer container 044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36. 
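The kubelet FlexVolume errors above are one failure repeated each probe cycle: driver-call.go execs the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ with the `init` command and tries to unmarshal its stdout as JSON; since the `uds` executable is absent, stdout is empty and decoding fails with "unexpected end of JSON input". A minimal Python sketch of that call-and-unmarshal step (`call_driver` is a hypothetical helper, not kubelet source):

```python
import json
import subprocess

def call_driver(executable: str, args: list[str]) -> dict:
    """Exec a FlexVolume driver and unmarshal its stdout as JSON.

    Mirrors the failure mode logged above: a missing executable yields
    empty output, and decoding "" reproduces the unmarshal error.
    """
    try:
        proc = subprocess.run([executable, *args], capture_output=True, text=True)
        output = proc.stdout
    except (FileNotFoundError, OSError):
        # corresponds to: executable file not found in $PATH, output: ""
        output = ""
    try:
        return json.loads(output)
    except json.JSONDecodeError as exc:
        raise RuntimeError(
            f"Failed to unmarshal output for command: {args[0]}, "
            f"output: {output!r}"
        ) from exc
```

The Calico `flexvol-driver` container whose image pull and start are logged just above exists to install exactly that missing driver binary, after which the probe succeeds.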
Nov 1 00:21:42.919305 containerd[1460]: time="2025-11-01T00:21:42.919251014Z" level=info msg="StartContainer for \"044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36\" returns successfully" Nov 1 00:21:42.938151 systemd[1]: cri-containerd-044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36.scope: Deactivated successfully. Nov 1 00:21:42.974234 kubelet[2603]: E1101 00:21:42.973818 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:42.979323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36-rootfs.mount: Deactivated successfully. Nov 1 00:21:43.095028 kubelet[2603]: I1101 00:21:43.094886 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:21:43.121638 kubelet[2603]: I1101 00:21:43.120034 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cd6bccf56-lcdlv" podStartSLOduration=2.7074555829999998 podStartE2EDuration="5.120010761s" podCreationTimestamp="2025-11-01 00:21:38 +0000 UTC" firstStartedPulling="2025-11-01 00:21:39.201268292 +0000 UTC m=+24.484278133" lastFinishedPulling="2025-11-01 00:21:41.613823455 +0000 UTC m=+26.896833311" observedRunningTime="2025-11-01 00:21:42.175800877 +0000 UTC m=+27.458810749" watchObservedRunningTime="2025-11-01 00:21:43.120010761 +0000 UTC m=+28.403020623" Nov 1 00:21:43.664844 containerd[1460]: time="2025-11-01T00:21:43.664763371Z" level=info msg="shim disconnected" id=044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36 namespace=k8s.io Nov 1 00:21:43.664844 containerd[1460]: time="2025-11-01T00:21:43.664845259Z" level=warning msg="cleaning up after 
shim disconnected" id=044fcae996f63570197ea7d353fd45053b6fdccfb0acf7752806d2ca1b805e36 namespace=k8s.io Nov 1 00:21:43.665428 containerd[1460]: time="2025-11-01T00:21:43.664860485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:21:44.108470 containerd[1460]: time="2025-11-01T00:21:44.107946156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:21:44.963186 kubelet[2603]: E1101 00:21:44.963121 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:46.962468 kubelet[2603]: E1101 00:21:46.962385 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:47.830857 containerd[1460]: time="2025-11-01T00:21:47.830784596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:47.832323 containerd[1460]: time="2025-11-01T00:21:47.832260482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:21:47.833382 containerd[1460]: time="2025-11-01T00:21:47.833344382Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:47.836740 containerd[1460]: time="2025-11-01T00:21:47.836680819Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:47.838018 containerd[1460]: time="2025-11-01T00:21:47.837925212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.729923186s" Nov 1 00:21:47.838018 containerd[1460]: time="2025-11-01T00:21:47.837974802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:21:47.845119 containerd[1460]: time="2025-11-01T00:21:47.845061018Z" level=info msg="CreateContainer within sandbox \"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:21:47.869499 containerd[1460]: time="2025-11-01T00:21:47.869432686Z" level=info msg="CreateContainer within sandbox \"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130\"" Nov 1 00:21:47.871382 containerd[1460]: time="2025-11-01T00:21:47.871334086Z" level=info msg="StartContainer for \"97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130\"" Nov 1 00:21:47.936821 systemd[1]: Started cri-containerd-97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130.scope - libcontainer container 97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130. 
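The pod_startup_latency_tracker entry logged earlier for calico-typha-6cd6bccf56-lcdlv is internally consistent: the SLO duration appears to be the end-to-end startup duration minus the image-pull window (lastFinishedPulling − firstStartedPulling), which is why it comes out shorter. The arithmetic, using the m=+ monotonic offsets from that entry:

```python
# Values taken from the pod_startup_latency_tracker entry above (seconds).
pod_start_e2e_duration = 5.120010761   # podStartE2EDuration
first_started_pulling  = 24.484278133  # firstStartedPulling, m=+ offset
last_finished_pulling  = 26.896833311  # lastFinishedPulling, m=+ offset

# SLO duration excludes time spent pulling images:
image_pull_window = last_finished_pulling - first_started_pulling
pod_start_slo_duration = pod_start_e2e_duration - image_pull_window
# reproduces the logged podStartSLOduration ~= 2.707455583 (up to float noise)
```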
Nov 1 00:21:47.981336 containerd[1460]: time="2025-11-01T00:21:47.979035134Z" level=info msg="StartContainer for \"97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130\" returns successfully" Nov 1 00:21:48.961796 kubelet[2603]: E1101 00:21:48.961264 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:48.980063 containerd[1460]: time="2025-11-01T00:21:48.979988904Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:21:48.983282 systemd[1]: cri-containerd-97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130.scope: Deactivated successfully. Nov 1 00:21:49.024892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130-rootfs.mount: Deactivated successfully. Nov 1 00:21:49.074192 kubelet[2603]: I1101 00:21:49.074158 2603 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:21:49.190448 systemd[1]: Created slice kubepods-burstable-pod4cb2b081_68eb_4d8c_9ca8_d19766928a32.slice - libcontainer container kubepods-burstable-pod4cb2b081_68eb_4d8c_9ca8_d19766928a32.slice. Nov 1 00:21:49.301500 systemd[1]: Created slice kubepods-burstable-podbcea8374_b606_49c4_b6e2_18f85c1c70c0.slice - libcontainer container kubepods-burstable-podbcea8374_b606_49c4_b6e2_18f85c1c70c0.slice. 
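The "failed to reload cni configuration" error above is the runtime's CNI watcher reacting to a write of /etc/cni/net.d/calico-kubeconfig: the install-cni container has begun dropping files into the directory, but no network config exists there yet, and a kubeconfig is not one. A simplified sketch of such a config loader (assumed behavior, not containerd source):

```python
import json
from pathlib import Path

def load_cni_config(confdir: str = "/etc/cni/net.d") -> dict:
    """Return the first usable CNI network config in confdir.

    Simplified sketch: only *.conf/*.conflist/*.json files are considered,
    so writing calico-kubeconfig triggers the fs-watch reload but still
    leaves "no network config found in /etc/cni/net.d".
    """
    candidates = []
    for pattern in ("*.conf", "*.conflist", "*.json"):
        candidates.extend(Path(confdir).glob(pattern))
    for path in sorted(candidates):
        try:
            cfg = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or not JSON: skip it
        if isinstance(cfg, dict) and cfg.get("name") and ("plugins" in cfg or "type" in cfg):
            return cfg
    raise FileNotFoundError(f"no network config found in {confdir}")
```

Once install-cni writes a valid conflist (e.g. 10-calico.conflist), the loader succeeds and the "cni plugin not initialized" pod_workers errors below stop.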
Nov 1 00:21:49.309167 kubelet[2603]: I1101 00:21:49.308989 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cb2b081-68eb-4d8c-9ca8-d19766928a32-config-volume\") pod \"coredns-66bc5c9577-dh4gs\" (UID: \"4cb2b081-68eb-4d8c-9ca8-d19766928a32\") " pod="kube-system/coredns-66bc5c9577-dh4gs" Nov 1 00:21:49.309167 kubelet[2603]: I1101 00:21:49.309033 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsfd\" (UniqueName: \"kubernetes.io/projected/4cb2b081-68eb-4d8c-9ca8-d19766928a32-kube-api-access-mpsfd\") pod \"coredns-66bc5c9577-dh4gs\" (UID: \"4cb2b081-68eb-4d8c-9ca8-d19766928a32\") " pod="kube-system/coredns-66bc5c9577-dh4gs" Nov 1 00:21:49.410291 kubelet[2603]: I1101 00:21:49.410060 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcea8374-b606-49c4-b6e2-18f85c1c70c0-config-volume\") pod \"coredns-66bc5c9577-mbcv9\" (UID: \"bcea8374-b606-49c4-b6e2-18f85c1c70c0\") " pod="kube-system/coredns-66bc5c9577-mbcv9" Nov 1 00:21:49.410291 kubelet[2603]: I1101 00:21:49.410123 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbd6\" (UniqueName: \"kubernetes.io/projected/bcea8374-b606-49c4-b6e2-18f85c1c70c0-kube-api-access-ppbd6\") pod \"coredns-66bc5c9577-mbcv9\" (UID: \"bcea8374-b606-49c4-b6e2-18f85c1c70c0\") " pod="kube-system/coredns-66bc5c9577-mbcv9" Nov 1 00:21:49.534422 systemd[1]: Created slice kubepods-besteffort-pod4b15d46f_f330_471a_8fc4_3dc35af1a685.slice - libcontainer container kubepods-besteffort-pod4b15d46f_f330_471a_8fc4_3dc35af1a685.slice. 
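The "Created slice" entries above follow the kubelet's systemd cgroup-driver naming scheme: kubepods-&lt;qos&gt;-pod&lt;uid&gt;.slice, with the dashes of the pod UID escaped to underscores because "-" is systemd's slice hierarchy separator. A one-line sketch of that mapping (a hypothetical helper for illustration):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Systemd slice name for a pod cgroup under the systemd driver,
    e.g. kubepods-burstable-pod4cb2b081_68eb_4d8c_9ca8_d19766928a32.slice.
    Dashes in the UID become underscores: '-' separates slice levels."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"
```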
Nov 1 00:21:49.580475 containerd[1460]: time="2025-11-01T00:21:49.580297898Z" level=info msg="shim disconnected" id=97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130 namespace=k8s.io Nov 1 00:21:49.580475 containerd[1460]: time="2025-11-01T00:21:49.580393745Z" level=warning msg="cleaning up after shim disconnected" id=97ef8d5ca7de912eb52fd10da34ae18d11e93f5932e611fcc8eab58a3029f130 namespace=k8s.io Nov 1 00:21:49.582356 containerd[1460]: time="2025-11-01T00:21:49.580408535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:21:49.596879 systemd[1]: Created slice kubepods-besteffort-pod047da681_0394_40f3_ae91_7205aadc4ab4.slice - libcontainer container kubepods-besteffort-pod047da681_0394_40f3_ae91_7205aadc4ab4.slice. Nov 1 00:21:49.601899 containerd[1460]: time="2025-11-01T00:21:49.600047635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dh4gs,Uid:4cb2b081-68eb-4d8c-9ca8-d19766928a32,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:49.614046 kubelet[2603]: I1101 00:21:49.614006 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b15d46f-f330-471a-8fc4-3dc35af1a685-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-gflnh\" (UID: \"4b15d46f-f330-471a-8fc4-3dc35af1a685\") " pod="calico-system/goldmane-7c778bb748-gflnh" Nov 1 00:21:49.615769 kubelet[2603]: I1101 00:21:49.615735 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b15d46f-f330-471a-8fc4-3dc35af1a685-config\") pod \"goldmane-7c778bb748-gflnh\" (UID: \"4b15d46f-f330-471a-8fc4-3dc35af1a685\") " pod="calico-system/goldmane-7c778bb748-gflnh" Nov 1 00:21:49.623178 kubelet[2603]: I1101 00:21:49.621405 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/4b15d46f-f330-471a-8fc4-3dc35af1a685-goldmane-key-pair\") pod \"goldmane-7c778bb748-gflnh\" (UID: \"4b15d46f-f330-471a-8fc4-3dc35af1a685\") " pod="calico-system/goldmane-7c778bb748-gflnh" Nov 1 00:21:49.623178 kubelet[2603]: I1101 00:21:49.621784 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2g2\" (UniqueName: \"kubernetes.io/projected/4b15d46f-f330-471a-8fc4-3dc35af1a685-kube-api-access-xx2g2\") pod \"goldmane-7c778bb748-gflnh\" (UID: \"4b15d46f-f330-471a-8fc4-3dc35af1a685\") " pod="calico-system/goldmane-7c778bb748-gflnh" Nov 1 00:21:49.623386 containerd[1460]: time="2025-11-01T00:21:49.622894009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mbcv9,Uid:bcea8374-b606-49c4-b6e2-18f85c1c70c0,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:49.625965 systemd[1]: Created slice kubepods-besteffort-pod160577c0_dc7a_4380_a5c7_096e0298b76b.slice - libcontainer container kubepods-besteffort-pod160577c0_dc7a_4380_a5c7_096e0298b76b.slice. Nov 1 00:21:49.688009 systemd[1]: Created slice kubepods-besteffort-pod203882db_9dc1_4764_9072_f06a1151f6af.slice - libcontainer container kubepods-besteffort-pod203882db_9dc1_4764_9072_f06a1151f6af.slice. Nov 1 00:21:49.713753 systemd[1]: Created slice kubepods-besteffort-podaf0d3006_44cf_49fd_af6f_37984237612e.slice - libcontainer container kubepods-besteffort-podaf0d3006_44cf_49fd_af6f_37984237612e.slice. 
Nov 1 00:21:49.722880 kubelet[2603]: I1101 00:21:49.722305 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/047da681-0394-40f3-ae91-7205aadc4ab4-tigera-ca-bundle\") pod \"calico-kube-controllers-785c977b7d-jc8q5\" (UID: \"047da681-0394-40f3-ae91-7205aadc4ab4\") " pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5"
Nov 1 00:21:49.722880 kubelet[2603]: I1101 00:21:49.722382 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af0d3006-44cf-49fd-af6f-37984237612e-calico-apiserver-certs\") pod \"calico-apiserver-55b4c78ffc-tjjrz\" (UID: \"af0d3006-44cf-49fd-af6f-37984237612e\") " pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz"
Nov 1 00:21:49.722880 kubelet[2603]: I1101 00:21:49.722413 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/203882db-9dc1-4764-9072-f06a1151f6af-whisker-backend-key-pair\") pod \"whisker-58c88756-g94p6\" (UID: \"203882db-9dc1-4764-9072-f06a1151f6af\") " pod="calico-system/whisker-58c88756-g94p6"
Nov 1 00:21:49.722880 kubelet[2603]: I1101 00:21:49.722448 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203882db-9dc1-4764-9072-f06a1151f6af-whisker-ca-bundle\") pod \"whisker-58c88756-g94p6\" (UID: \"203882db-9dc1-4764-9072-f06a1151f6af\") " pod="calico-system/whisker-58c88756-g94p6"
Nov 1 00:21:49.722880 kubelet[2603]: I1101 00:21:49.722480 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klfbs\" (UniqueName: \"kubernetes.io/projected/160577c0-dc7a-4380-a5c7-096e0298b76b-kube-api-access-klfbs\") pod \"calico-apiserver-55b4c78ffc-92jbd\" (UID: \"160577c0-dc7a-4380-a5c7-096e0298b76b\") " pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd"
Nov 1 00:21:49.723264 kubelet[2603]: I1101 00:21:49.722508 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6bg\" (UniqueName: \"kubernetes.io/projected/203882db-9dc1-4764-9072-f06a1151f6af-kube-api-access-rz6bg\") pod \"whisker-58c88756-g94p6\" (UID: \"203882db-9dc1-4764-9072-f06a1151f6af\") " pod="calico-system/whisker-58c88756-g94p6"
Nov 1 00:21:49.723264 kubelet[2603]: I1101 00:21:49.722551 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbsh\" (UniqueName: \"kubernetes.io/projected/af0d3006-44cf-49fd-af6f-37984237612e-kube-api-access-nlbsh\") pod \"calico-apiserver-55b4c78ffc-tjjrz\" (UID: \"af0d3006-44cf-49fd-af6f-37984237612e\") " pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz"
Nov 1 00:21:49.725181 kubelet[2603]: I1101 00:21:49.722579 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55d7\" (UniqueName: \"kubernetes.io/projected/047da681-0394-40f3-ae91-7205aadc4ab4-kube-api-access-m55d7\") pod \"calico-kube-controllers-785c977b7d-jc8q5\" (UID: \"047da681-0394-40f3-ae91-7205aadc4ab4\") " pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5"
Nov 1 00:21:49.725181 kubelet[2603]: I1101 00:21:49.723534 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/160577c0-dc7a-4380-a5c7-096e0298b76b-calico-apiserver-certs\") pod \"calico-apiserver-55b4c78ffc-92jbd\" (UID: \"160577c0-dc7a-4380-a5c7-096e0298b76b\") " pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd"
Nov 1 00:21:49.747619 containerd[1460]: time="2025-11-01T00:21:49.746530420Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:21:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 1 00:21:49.850838 containerd[1460]: time="2025-11-01T00:21:49.850691210Z" level=error msg="Failed to destroy network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.851323 containerd[1460]: time="2025-11-01T00:21:49.851271431Z" level=error msg="encountered an error cleaning up failed sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.851426 containerd[1460]: time="2025-11-01T00:21:49.851375349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dh4gs,Uid:4cb2b081-68eb-4d8c-9ca8-d19766928a32,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.855492 kubelet[2603]: E1101 00:21:49.855118 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.858277 kubelet[2603]: E1101 00:21:49.856767 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dh4gs"
Nov 1 00:21:49.858277 kubelet[2603]: E1101 00:21:49.856999 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dh4gs"
Nov 1 00:21:49.858277 kubelet[2603]: E1101 00:21:49.857180 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dh4gs_kube-system(4cb2b081-68eb-4d8c-9ca8-d19766928a32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dh4gs_kube-system(4cb2b081-68eb-4d8c-9ca8-d19766928a32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dh4gs" podUID="4cb2b081-68eb-4d8c-9ca8-d19766928a32"
Nov 1 00:21:49.860316 containerd[1460]: time="2025-11-01T00:21:49.859861316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gflnh,Uid:4b15d46f-f330-471a-8fc4-3dc35af1a685,Namespace:calico-system,Attempt:0,}"
Nov 1 00:21:49.872935 containerd[1460]: time="2025-11-01T00:21:49.872879815Z" level=error msg="Failed to destroy network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.874810 containerd[1460]: time="2025-11-01T00:21:49.873541257Z" level=error msg="encountered an error cleaning up failed sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.874810 containerd[1460]: time="2025-11-01T00:21:49.873708685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mbcv9,Uid:bcea8374-b606-49c4-b6e2-18f85c1c70c0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.876284 kubelet[2603]: E1101 00:21:49.876229 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.877134 kubelet[2603]: E1101 00:21:49.877091 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mbcv9"
Nov 1 00:21:49.877473 kubelet[2603]: E1101 00:21:49.877444 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mbcv9"
Nov 1 00:21:49.878508 kubelet[2603]: E1101 00:21:49.878446 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mbcv9_kube-system(bcea8374-b606-49c4-b6e2-18f85c1c70c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mbcv9_kube-system(bcea8374-b606-49c4-b6e2-18f85c1c70c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mbcv9" podUID="bcea8374-b606-49c4-b6e2-18f85c1c70c0"
Nov 1 00:21:49.923038 containerd[1460]: time="2025-11-01T00:21:49.922989931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785c977b7d-jc8q5,Uid:047da681-0394-40f3-ae91-7205aadc4ab4,Namespace:calico-system,Attempt:0,}"
Nov 1 00:21:49.942032 containerd[1460]: time="2025-11-01T00:21:49.941716086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-92jbd,Uid:160577c0-dc7a-4380-a5c7-096e0298b76b,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:21:49.978972 containerd[1460]: time="2025-11-01T00:21:49.978883730Z" level=error msg="Failed to destroy network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.980550 containerd[1460]: time="2025-11-01T00:21:49.980475714Z" level=error msg="encountered an error cleaning up failed sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.981307 containerd[1460]: time="2025-11-01T00:21:49.980554916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gflnh,Uid:4b15d46f-f330-471a-8fc4-3dc35af1a685,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.981529 kubelet[2603]: E1101 00:21:49.981317 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:49.981529 kubelet[2603]: E1101 00:21:49.981388 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-gflnh"
Nov 1 00:21:49.981529 kubelet[2603]: E1101 00:21:49.981418 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-gflnh"
Nov 1 00:21:49.982882 kubelet[2603]: E1101 00:21:49.981502 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-gflnh_calico-system(4b15d46f-f330-471a-8fc4-3dc35af1a685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-gflnh_calico-system(4b15d46f-f330-471a-8fc4-3dc35af1a685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685"
Nov 1 00:21:50.008858 containerd[1460]: time="2025-11-01T00:21:50.008617473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c88756-g94p6,Uid:203882db-9dc1-4764-9072-f06a1151f6af,Namespace:calico-system,Attempt:0,}"
Nov 1 00:21:50.044698 containerd[1460]: time="2025-11-01T00:21:50.044635042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-tjjrz,Uid:af0d3006-44cf-49fd-af6f-37984237612e,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:21:50.077200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb-shm.mount: Deactivated successfully.
Nov 1 00:21:50.147629 containerd[1460]: time="2025-11-01T00:21:50.146764630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 1 00:21:50.156196 kubelet[2603]: I1101 00:21:50.154215 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44"
Nov 1 00:21:50.156351 containerd[1460]: time="2025-11-01T00:21:50.155957454Z" level=info msg="StopPodSandbox for \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\""
Nov 1 00:21:50.157673 containerd[1460]: time="2025-11-01T00:21:50.157636952Z" level=info msg="Ensure that sandbox 2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44 in task-service has been cleanup successfully"
Nov 1 00:21:50.169641 kubelet[2603]: I1101 00:21:50.167911 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb"
Nov 1 00:21:50.172439 containerd[1460]: time="2025-11-01T00:21:50.171942564Z" level=info msg="StopPodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\""
Nov 1 00:21:50.175137 containerd[1460]: time="2025-11-01T00:21:50.174659882Z" level=info msg="Ensure that sandbox a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb in task-service has been cleanup successfully"
Nov 1 00:21:50.177976 kubelet[2603]: I1101 00:21:50.177946 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2"
Nov 1 00:21:50.186375 containerd[1460]: time="2025-11-01T00:21:50.186328820Z" level=info msg="StopPodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\""
Nov 1 00:21:50.186715 containerd[1460]: time="2025-11-01T00:21:50.186622621Z" level=info msg="Ensure that sandbox 3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2 in task-service has been cleanup successfully"
Nov 1 00:21:50.237033 containerd[1460]: time="2025-11-01T00:21:50.236744876Z" level=error msg="Failed to destroy network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.244816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9-shm.mount: Deactivated successfully.
Nov 1 00:21:50.269458 containerd[1460]: time="2025-11-01T00:21:50.269354656Z" level=error msg="encountered an error cleaning up failed sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.269660 containerd[1460]: time="2025-11-01T00:21:50.269477690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785c977b7d-jc8q5,Uid:047da681-0394-40f3-ae91-7205aadc4ab4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.275114 kubelet[2603]: E1101 00:21:50.275062 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.275420 kubelet[2603]: E1101 00:21:50.275388 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5"
Nov 1 00:21:50.275673 kubelet[2603]: E1101 00:21:50.275632 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5"
Nov 1 00:21:50.276010 kubelet[2603]: E1101 00:21:50.275963 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-785c977b7d-jc8q5_calico-system(047da681-0394-40f3-ae91-7205aadc4ab4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-785c977b7d-jc8q5_calico-system(047da681-0394-40f3-ae91-7205aadc4ab4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4"
Nov 1 00:21:50.297116 containerd[1460]: time="2025-11-01T00:21:50.296888612Z" level=error msg="StopPodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" failed" error="failed to destroy network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.297300 kubelet[2603]: E1101 00:21:50.297217 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb"
Nov 1 00:21:50.297397 kubelet[2603]: E1101 00:21:50.297317 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb"}
Nov 1 00:21:50.297458 kubelet[2603]: E1101 00:21:50.297416 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cb2b081-68eb-4d8c-9ca8-d19766928a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 1 00:21:50.297571 kubelet[2603]: E1101 00:21:50.297462 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cb2b081-68eb-4d8c-9ca8-d19766928a32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dh4gs" podUID="4cb2b081-68eb-4d8c-9ca8-d19766928a32"
Nov 1 00:21:50.333547 containerd[1460]: time="2025-11-01T00:21:50.333482702Z" level=error msg="Failed to destroy network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.342439 containerd[1460]: time="2025-11-01T00:21:50.342368387Z" level=error msg="encountered an error cleaning up failed sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.342605 containerd[1460]: time="2025-11-01T00:21:50.342474932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-92jbd,Uid:160577c0-dc7a-4380-a5c7-096e0298b76b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.344248 kubelet[2603]: E1101 00:21:50.342968 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.344248 kubelet[2603]: E1101 00:21:50.343045 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd"
Nov 1 00:21:50.344248 kubelet[2603]: E1101 00:21:50.343084 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd"
Nov 1 00:21:50.344505 kubelet[2603]: E1101 00:21:50.343159 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55b4c78ffc-92jbd_calico-apiserver(160577c0-dc7a-4380-a5c7-096e0298b76b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55b4c78ffc-92jbd_calico-apiserver(160577c0-dc7a-4380-a5c7-096e0298b76b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b"
Nov 1 00:21:50.353784 containerd[1460]: time="2025-11-01T00:21:50.353719565Z" level=error msg="StopPodSandbox for \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" failed" error="failed to destroy network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.354829 kubelet[2603]: E1101 00:21:50.354018 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44"
Nov 1 00:21:50.354829 kubelet[2603]: E1101 00:21:50.354074 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44"}
Nov 1 00:21:50.354829 kubelet[2603]: E1101 00:21:50.354117 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcea8374-b606-49c4-b6e2-18f85c1c70c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 1 00:21:50.354829 kubelet[2603]: E1101 00:21:50.354166 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcea8374-b606-49c4-b6e2-18f85c1c70c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mbcv9" podUID="bcea8374-b606-49c4-b6e2-18f85c1c70c0"
Nov 1 00:21:50.372566 containerd[1460]: time="2025-11-01T00:21:50.371880521Z" level=error msg="StopPodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" failed" error="failed to destroy network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.372749 kubelet[2603]: E1101 00:21:50.372218 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2"
Nov 1 00:21:50.372749 kubelet[2603]: E1101 00:21:50.372286 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2"}
Nov 1 00:21:50.372749 kubelet[2603]: E1101 00:21:50.372334 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b15d46f-f330-471a-8fc4-3dc35af1a685\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 1 00:21:50.372749 kubelet[2603]: E1101 00:21:50.372374 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b15d46f-f330-471a-8fc4-3dc35af1a685\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685"
Nov 1 00:21:50.395691 containerd[1460]: time="2025-11-01T00:21:50.394868771Z" level=error msg="Failed to destroy network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.395691 containerd[1460]: time="2025-11-01T00:21:50.395511901Z" level=error msg="encountered an error cleaning up failed sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.395691 containerd[1460]: time="2025-11-01T00:21:50.395624525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c88756-g94p6,Uid:203882db-9dc1-4764-9072-f06a1151f6af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.396165 kubelet[2603]: E1101 00:21:50.396114 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:21:50.396264 kubelet[2603]: E1101 00:21:50.396194 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58c88756-g94p6"
Nov 1 00:21:50.396264 kubelet[2603]: E1101 00:21:50.396229 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58c88756-g94p6"
Nov 1 00:21:50.396372 kubelet[2603]: E1101 00:21:50.396312 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58c88756-g94p6_calico-system(203882db-9dc1-4764-9072-f06a1151f6af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58c88756-g94p6_calico-system(203882db-9dc1-4764-9072-f06a1151f6af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58c88756-g94p6" podUID="203882db-9dc1-4764-9072-f06a1151f6af"
Nov 1 00:21:50.403390 containerd[1460]:
time="2025-11-01T00:21:50.403134973Z" level=error msg="Failed to destroy network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:50.403868 containerd[1460]: time="2025-11-01T00:21:50.403567478Z" level=error msg="encountered an error cleaning up failed sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:50.403868 containerd[1460]: time="2025-11-01T00:21:50.403663610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-tjjrz,Uid:af0d3006-44cf-49fd-af6f-37984237612e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:50.404065 kubelet[2603]: E1101 00:21:50.403909 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:50.404065 kubelet[2603]: E1101 00:21:50.403978 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" Nov 1 00:21:50.404065 kubelet[2603]: E1101 00:21:50.404041 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" Nov 1 00:21:50.404237 kubelet[2603]: E1101 00:21:50.404117 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55b4c78ffc-tjjrz_calico-apiserver(af0d3006-44cf-49fd-af6f-37984237612e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55b4c78ffc-tjjrz_calico-apiserver(af0d3006-44cf-49fd-af6f-37984237612e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:21:50.971342 systemd[1]: Created slice kubepods-besteffort-poda13cec52_774e_41dd_8b73_7a0c3559c1e0.slice - libcontainer container kubepods-besteffort-poda13cec52_774e_41dd_8b73_7a0c3559c1e0.slice. 
Nov 1 00:21:50.977477 containerd[1460]: time="2025-11-01T00:21:50.977412914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvqzr,Uid:a13cec52-774e-41dd-8b73-7a0c3559c1e0,Namespace:calico-system,Attempt:0,}" Nov 1 00:21:51.028761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76-shm.mount: Deactivated successfully. Nov 1 00:21:51.028927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3-shm.mount: Deactivated successfully. Nov 1 00:21:51.029035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93-shm.mount: Deactivated successfully. Nov 1 00:21:51.071886 containerd[1460]: time="2025-11-01T00:21:51.071711573Z" level=error msg="Failed to destroy network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.074989 containerd[1460]: time="2025-11-01T00:21:51.074767256Z" level=error msg="encountered an error cleaning up failed sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.074989 containerd[1460]: time="2025-11-01T00:21:51.074852079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvqzr,Uid:a13cec52-774e-41dd-8b73-7a0c3559c1e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.077269 kubelet[2603]: E1101 00:21:51.075364 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.077269 kubelet[2603]: E1101 00:21:51.075444 2603 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:51.077269 kubelet[2603]: E1101 00:21:51.075476 2603 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cvqzr" Nov 1 00:21:51.080447 kubelet[2603]: E1101 00:21:51.075562 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:51.079478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729-shm.mount: Deactivated successfully. Nov 1 00:21:51.182990 kubelet[2603]: I1101 00:21:51.182930 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:21:51.184441 containerd[1460]: time="2025-11-01T00:21:51.184244012Z" level=info msg="StopPodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\"" Nov 1 00:21:51.186430 containerd[1460]: time="2025-11-01T00:21:51.185179888Z" level=info msg="Ensure that sandbox 18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3 in task-service has been cleanup successfully" Nov 1 00:21:51.189608 kubelet[2603]: I1101 00:21:51.189561 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:21:51.192025 containerd[1460]: time="2025-11-01T00:21:51.191840035Z" level=info msg="StopPodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\"" Nov 1 00:21:51.195456 containerd[1460]: time="2025-11-01T00:21:51.194347099Z" level=info msg="Ensure that sandbox 001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729 in task-service has been cleanup successfully" Nov 1 00:21:51.209760 kubelet[2603]: I1101 00:21:51.207425 2603 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:21:51.210358 containerd[1460]: time="2025-11-01T00:21:51.210304624Z" level=info msg="StopPodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\"" Nov 1 00:21:51.212743 containerd[1460]: time="2025-11-01T00:21:51.212671442Z" level=info msg="Ensure that sandbox d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93 in task-service has been cleanup successfully" Nov 1 00:21:51.218464 kubelet[2603]: I1101 00:21:51.218431 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:21:51.220012 containerd[1460]: time="2025-11-01T00:21:51.219789396Z" level=info msg="StopPodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\"" Nov 1 00:21:51.220659 containerd[1460]: time="2025-11-01T00:21:51.220242915Z" level=info msg="Ensure that sandbox d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76 in task-service has been cleanup successfully" Nov 1 00:21:51.246128 kubelet[2603]: I1101 00:21:51.245999 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:21:51.251621 containerd[1460]: time="2025-11-01T00:21:51.251103407Z" level=info msg="StopPodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\"" Nov 1 00:21:51.251621 containerd[1460]: time="2025-11-01T00:21:51.251361924Z" level=info msg="Ensure that sandbox 750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9 in task-service has been cleanup successfully" Nov 1 00:21:51.359959 containerd[1460]: time="2025-11-01T00:21:51.359895074Z" level=error msg="StopPodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" failed" error="failed to destroy network for sandbox 
\"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.360657 kubelet[2603]: E1101 00:21:51.360425 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:21:51.360657 kubelet[2603]: E1101 00:21:51.360494 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3"} Nov 1 00:21:51.360657 kubelet[2603]: E1101 00:21:51.360542 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"203882db-9dc1-4764-9072-f06a1151f6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:21:51.360657 kubelet[2603]: E1101 00:21:51.360609 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"203882db-9dc1-4764-9072-f06a1151f6af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58c88756-g94p6" podUID="203882db-9dc1-4764-9072-f06a1151f6af" Nov 1 00:21:51.378681 containerd[1460]: time="2025-11-01T00:21:51.378550204Z" level=error msg="StopPodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" failed" error="failed to destroy network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.379190 kubelet[2603]: E1101 00:21:51.378982 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:21:51.379190 kubelet[2603]: E1101 00:21:51.379049 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9"} Nov 1 00:21:51.379190 kubelet[2603]: E1101 00:21:51.379103 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"047da681-0394-40f3-ae91-7205aadc4ab4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Nov 1 00:21:51.379190 kubelet[2603]: E1101 00:21:51.379148 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"047da681-0394-40f3-ae91-7205aadc4ab4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:21:51.411853 containerd[1460]: time="2025-11-01T00:21:51.411793813Z" level=error msg="StopPodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" failed" error="failed to destroy network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.412567 kubelet[2603]: E1101 00:21:51.412497 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:21:51.412696 kubelet[2603]: E1101 00:21:51.412606 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76"} Nov 1 00:21:51.412696 kubelet[2603]: E1101 00:21:51.412655 2603 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af0d3006-44cf-49fd-af6f-37984237612e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:21:51.412889 kubelet[2603]: E1101 00:21:51.412702 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af0d3006-44cf-49fd-af6f-37984237612e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:21:51.413071 containerd[1460]: time="2025-11-01T00:21:51.413023131Z" level=error msg="StopPodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" failed" error="failed to destroy network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.413414 kubelet[2603]: E1101 00:21:51.413367 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:21:51.413526 kubelet[2603]: E1101 00:21:51.413428 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729"} Nov 1 00:21:51.413526 kubelet[2603]: E1101 00:21:51.413470 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:21:51.413526 kubelet[2603]: E1101 00:21:51.413509 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a13cec52-774e-41dd-8b73-7a0c3559c1e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:21:51.416387 containerd[1460]: time="2025-11-01T00:21:51.416346033Z" level=error msg="StopPodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" failed" error="failed to destroy network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:21:51.416772 kubelet[2603]: E1101 00:21:51.416735 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:21:51.416931 kubelet[2603]: E1101 00:21:51.416912 2603 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93"} Nov 1 00:21:51.417074 kubelet[2603]: E1101 00:21:51.417052 2603 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"160577c0-dc7a-4380-a5c7-096e0298b76b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:21:51.417250 kubelet[2603]: E1101 00:21:51.417221 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"160577c0-dc7a-4380-a5c7-096e0298b76b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:21:57.754008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949095985.mount: Deactivated successfully. Nov 1 00:21:57.791116 containerd[1460]: time="2025-11-01T00:21:57.791022583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:57.792564 containerd[1460]: time="2025-11-01T00:21:57.792502214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:21:57.795631 containerd[1460]: time="2025-11-01T00:21:57.793755019Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:57.797438 containerd[1460]: time="2025-11-01T00:21:57.797393303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:21:57.798542 containerd[1460]: time="2025-11-01T00:21:57.798499708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.65167308s" Nov 1 00:21:57.798743 containerd[1460]: time="2025-11-01T00:21:57.798712190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:21:57.826232 containerd[1460]: time="2025-11-01T00:21:57.826174096Z" level=info msg="CreateContainer within sandbox 
\"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:21:57.853316 containerd[1460]: time="2025-11-01T00:21:57.853259995Z" level=info msg="CreateContainer within sandbox \"1fc5146b3f24ba2c3beed0601e3a893864918cfdb6782c63f3b2b8218d1421d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c6eeb65c733ca709847ed998442981c8c9995396d0b8f50defffe1790e1f28f\"" Nov 1 00:21:57.854398 containerd[1460]: time="2025-11-01T00:21:57.854122842Z" level=info msg="StartContainer for \"4c6eeb65c733ca709847ed998442981c8c9995396d0b8f50defffe1790e1f28f\"" Nov 1 00:21:57.896828 systemd[1]: Started cri-containerd-4c6eeb65c733ca709847ed998442981c8c9995396d0b8f50defffe1790e1f28f.scope - libcontainer container 4c6eeb65c733ca709847ed998442981c8c9995396d0b8f50defffe1790e1f28f. Nov 1 00:21:57.939795 containerd[1460]: time="2025-11-01T00:21:57.939737842Z" level=info msg="StartContainer for \"4c6eeb65c733ca709847ed998442981c8c9995396d0b8f50defffe1790e1f28f\" returns successfully" Nov 1 00:21:58.083164 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:21:58.083328 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 00:21:58.210957 containerd[1460]: time="2025-11-01T00:21:58.210131196Z" level=info msg="StopPodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\"" Nov 1 00:21:58.344966 kubelet[2603]: I1101 00:21:58.344730 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-498p2" podStartSLOduration=1.786035592 podStartE2EDuration="20.344703092s" podCreationTimestamp="2025-11-01 00:21:38 +0000 UTC" firstStartedPulling="2025-11-01 00:21:39.241171935 +0000 UTC m=+24.524181779" lastFinishedPulling="2025-11-01 00:21:57.79983942 +0000 UTC m=+43.082849279" observedRunningTime="2025-11-01 00:21:58.328505774 +0000 UTC m=+43.611515646" watchObservedRunningTime="2025-11-01 00:21:58.344703092 +0000 UTC m=+43.627712961" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.348 [INFO][3809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.348 [INFO][3809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" iface="eth0" netns="/var/run/netns/cni-9d745ad5-9cae-527d-a609-41339fb8eb76" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.349 [INFO][3809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" iface="eth0" netns="/var/run/netns/cni-9d745ad5-9cae-527d-a609-41339fb8eb76" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.349 [INFO][3809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" iface="eth0" netns="/var/run/netns/cni-9d745ad5-9cae-527d-a609-41339fb8eb76" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.349 [INFO][3809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.350 [INFO][3809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.397 [INFO][3838] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.399 [INFO][3838] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.399 [INFO][3838] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.414 [WARNING][3838] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.415 [INFO][3838] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.420 [INFO][3838] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:21:58.446582 containerd[1460]: 2025-11-01 00:21:58.435 [INFO][3809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:21:58.448344 containerd[1460]: time="2025-11-01T00:21:58.448122480Z" level=info msg="TearDown network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" successfully" Nov 1 00:21:58.448344 containerd[1460]: time="2025-11-01T00:21:58.448170266Z" level=info msg="StopPodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" returns successfully" Nov 1 00:21:58.597211 kubelet[2603]: I1101 00:21:58.597028 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203882db-9dc1-4764-9072-f06a1151f6af-whisker-ca-bundle\") pod \"203882db-9dc1-4764-9072-f06a1151f6af\" (UID: \"203882db-9dc1-4764-9072-f06a1151f6af\") " Nov 1 00:21:58.597734 kubelet[2603]: I1101 00:21:58.597676 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/203882db-9dc1-4764-9072-f06a1151f6af-whisker-backend-key-pair\") pod \"203882db-9dc1-4764-9072-f06a1151f6af\" (UID: \"203882db-9dc1-4764-9072-f06a1151f6af\") " Nov 1 00:21:58.597846 kubelet[2603]: I1101 00:21:58.597760 2603 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz6bg\" (UniqueName: \"kubernetes.io/projected/203882db-9dc1-4764-9072-f06a1151f6af-kube-api-access-rz6bg\") pod \"203882db-9dc1-4764-9072-f06a1151f6af\" (UID: \"203882db-9dc1-4764-9072-f06a1151f6af\") " Nov 1 00:21:58.599859 kubelet[2603]: I1101 00:21:58.599729 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/203882db-9dc1-4764-9072-f06a1151f6af-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "203882db-9dc1-4764-9072-f06a1151f6af" (UID: "203882db-9dc1-4764-9072-f06a1151f6af"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:21:58.607889 kubelet[2603]: I1101 00:21:58.607838 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203882db-9dc1-4764-9072-f06a1151f6af-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "203882db-9dc1-4764-9072-f06a1151f6af" (UID: "203882db-9dc1-4764-9072-f06a1151f6af"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:21:58.608050 kubelet[2603]: I1101 00:21:58.607898 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203882db-9dc1-4764-9072-f06a1151f6af-kube-api-access-rz6bg" (OuterVolumeSpecName: "kube-api-access-rz6bg") pod "203882db-9dc1-4764-9072-f06a1151f6af" (UID: "203882db-9dc1-4764-9072-f06a1151f6af"). InnerVolumeSpecName "kube-api-access-rz6bg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:21:58.699157 kubelet[2603]: I1101 00:21:58.699081 2603 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/203882db-9dc1-4764-9072-f06a1151f6af-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" DevicePath \"\"" Nov 1 00:21:58.699157 kubelet[2603]: I1101 00:21:58.699131 2603 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/203882db-9dc1-4764-9072-f06a1151f6af-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" DevicePath \"\"" Nov 1 00:21:58.699157 kubelet[2603]: I1101 00:21:58.699152 2603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz6bg\" (UniqueName: \"kubernetes.io/projected/203882db-9dc1-4764-9072-f06a1151f6af-kube-api-access-rz6bg\") on node \"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9\" DevicePath \"\"" Nov 1 00:21:58.756917 systemd[1]: run-netns-cni\x2d9d745ad5\x2d9cae\x2d527d\x2da609\x2d41339fb8eb76.mount: Deactivated successfully. Nov 1 00:21:58.757052 systemd[1]: var-lib-kubelet-pods-203882db\x2d9dc1\x2d4764\x2d9072\x2df06a1151f6af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drz6bg.mount: Deactivated successfully. Nov 1 00:21:58.757171 systemd[1]: var-lib-kubelet-pods-203882db\x2d9dc1\x2d4764\x2d9072\x2df06a1151f6af-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:21:58.888491 kubelet[2603]: I1101 00:21:58.888227 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:21:58.971634 systemd[1]: Removed slice kubepods-besteffort-pod203882db_9dc1_4764_9072_f06a1151f6af.slice - libcontainer container kubepods-besteffort-pod203882db_9dc1_4764_9072_f06a1151f6af.slice. 
Nov 1 00:21:59.340048 systemd[1]: run-containerd-runc-k8s.io-4c6eeb65c733ca709847ed998442981c8c9995396d0b8f50defffe1790e1f28f-runc.uWKL85.mount: Deactivated successfully. Nov 1 00:21:59.402496 systemd[1]: Created slice kubepods-besteffort-podd224b46c_00ee_4398_aed6_0fb0a4fe6275.slice - libcontainer container kubepods-besteffort-podd224b46c_00ee_4398_aed6_0fb0a4fe6275.slice. Nov 1 00:21:59.504953 kubelet[2603]: I1101 00:21:59.504898 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d224b46c-00ee-4398-aed6-0fb0a4fe6275-whisker-backend-key-pair\") pod \"whisker-6b64bcb7c8-98hnq\" (UID: \"d224b46c-00ee-4398-aed6-0fb0a4fe6275\") " pod="calico-system/whisker-6b64bcb7c8-98hnq" Nov 1 00:21:59.504953 kubelet[2603]: I1101 00:21:59.504958 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wcc\" (UniqueName: \"kubernetes.io/projected/d224b46c-00ee-4398-aed6-0fb0a4fe6275-kube-api-access-49wcc\") pod \"whisker-6b64bcb7c8-98hnq\" (UID: \"d224b46c-00ee-4398-aed6-0fb0a4fe6275\") " pod="calico-system/whisker-6b64bcb7c8-98hnq" Nov 1 00:21:59.504953 kubelet[2603]: I1101 00:21:59.505000 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d224b46c-00ee-4398-aed6-0fb0a4fe6275-whisker-ca-bundle\") pod \"whisker-6b64bcb7c8-98hnq\" (UID: \"d224b46c-00ee-4398-aed6-0fb0a4fe6275\") " pod="calico-system/whisker-6b64bcb7c8-98hnq" Nov 1 00:21:59.719032 containerd[1460]: time="2025-11-01T00:21:59.718367926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b64bcb7c8-98hnq,Uid:d224b46c-00ee-4398-aed6-0fb0a4fe6275,Namespace:calico-system,Attempt:0,}" Nov 1 00:21:59.989065 systemd-networkd[1357]: cali774cb968d63: Link UP Nov 1 00:21:59.991795 systemd-networkd[1357]: cali774cb968d63: Gained 
carrier Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.823 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.856 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0 whisker-6b64bcb7c8- calico-system d224b46c-00ee-4398-aed6-0fb0a4fe6275 940 0 2025-11-01 00:21:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b64bcb7c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 whisker-6b64bcb7c8-98hnq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali774cb968d63 [] [] }} ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.857 [INFO][3934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.912 [INFO][3978] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" HandleID="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 
00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.912 [INFO][3978] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" HandleID="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"whisker-6b64bcb7c8-98hnq", "timestamp":"2025-11-01 00:21:59.912446152 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.912 [INFO][3978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.912 [INFO][3978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.912 [INFO][3978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.931 [INFO][3978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.938 [INFO][3978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.946 [INFO][3978] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.949 [INFO][3978] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.952 [INFO][3978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.952 [INFO][3978] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.954 [INFO][3978] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359 Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.959 [INFO][3978] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 
handle="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.970 [INFO][3978] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.65/26] block=192.168.106.64/26 handle="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.970 [INFO][3978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.65/26] handle="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.970 [INFO][3978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:00.036893 containerd[1460]: 2025-11-01 00:21:59.970 [INFO][3978] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.65/26] IPv6=[] ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" HandleID="k8s-pod-network.c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 00:22:00.038192 containerd[1460]: 2025-11-01 00:21:59.973 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0", 
GenerateName:"whisker-6b64bcb7c8-", Namespace:"calico-system", SelfLink:"", UID:"d224b46c-00ee-4398-aed6-0fb0a4fe6275", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b64bcb7c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"whisker-6b64bcb7c8-98hnq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali774cb968d63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:00.038192 containerd[1460]: 2025-11-01 00:21:59.973 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.65/32] ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 00:22:00.038192 containerd[1460]: 2025-11-01 00:21:59.973 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali774cb968d63 ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" 
WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 00:22:00.038192 containerd[1460]: 2025-11-01 00:21:59.989 [INFO][3934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 00:22:00.038192 containerd[1460]: 2025-11-01 00:21:59.991 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0", GenerateName:"whisker-6b64bcb7c8-", Namespace:"calico-system", SelfLink:"", UID:"d224b46c-00ee-4398-aed6-0fb0a4fe6275", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b64bcb7c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", 
ContainerID:"c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359", Pod:"whisker-6b64bcb7c8-98hnq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali774cb968d63", MAC:"0e:84:76:a6:ab:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:00.038192 containerd[1460]: 2025-11-01 00:22:00.015 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359" Namespace="calico-system" Pod="whisker-6b64bcb7c8-98hnq" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--6b64bcb7c8--98hnq-eth0" Nov 1 00:22:00.086691 containerd[1460]: time="2025-11-01T00:22:00.086500567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:00.088736 containerd[1460]: time="2025-11-01T00:22:00.086914400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:00.089098 containerd[1460]: time="2025-11-01T00:22:00.087015032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:00.089850 containerd[1460]: time="2025-11-01T00:22:00.089460279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:00.165841 systemd[1]: Started cri-containerd-c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359.scope - libcontainer container c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359. 
Nov 1 00:22:00.278199 containerd[1460]: time="2025-11-01T00:22:00.277923242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b64bcb7c8-98hnq,Uid:d224b46c-00ee-4398-aed6-0fb0a4fe6275,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9d8fdfb63ab90b1e6f0fdc8dd661a5e7a461d487ab94290a85d39900ed04359\"" Nov 1 00:22:00.285122 containerd[1460]: time="2025-11-01T00:22:00.284783451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:22:00.500732 containerd[1460]: time="2025-11-01T00:22:00.500672715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:00.502708 containerd[1460]: time="2025-11-01T00:22:00.502627530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:22:00.502845 containerd[1460]: time="2025-11-01T00:22:00.502768491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:22:00.503413 kubelet[2603]: E1101 00:22:00.503094 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:00.503413 kubelet[2603]: E1101 00:22:00.503163 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:00.503676 kubelet[2603]: E1101 00:22:00.503541 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:00.506713 containerd[1460]: time="2025-11-01T00:22:00.506673396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:22:00.689005 kernel: bpftool[4077]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:22:00.724004 containerd[1460]: time="2025-11-01T00:22:00.723762683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:00.725772 containerd[1460]: time="2025-11-01T00:22:00.725569981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:22:00.725772 containerd[1460]: time="2025-11-01T00:22:00.725612172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:00.728302 kubelet[2603]: E1101 00:22:00.726135 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:00.728302 kubelet[2603]: E1101 00:22:00.726200 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:00.728302 kubelet[2603]: E1101 00:22:00.726299 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:00.728986 kubelet[2603]: E1101 00:22:00.726363 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:22:00.963435 containerd[1460]: time="2025-11-01T00:22:00.961844199Z" level=info msg="StopPodSandbox for 
\"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\"" Nov 1 00:22:00.969064 kubelet[2603]: I1101 00:22:00.969022 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203882db-9dc1-4764-9072-f06a1151f6af" path="/var/lib/kubelet/pods/203882db-9dc1-4764-9072-f06a1151f6af/volumes" Nov 1 00:22:01.152960 systemd-networkd[1357]: vxlan.calico: Link UP Nov 1 00:22:01.152977 systemd-networkd[1357]: vxlan.calico: Gained carrier Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.066 [INFO][4102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.066 [INFO][4102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" iface="eth0" netns="/var/run/netns/cni-72185471-aee7-6e3d-6547-6762bc9927d1" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.067 [INFO][4102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" iface="eth0" netns="/var/run/netns/cni-72185471-aee7-6e3d-6547-6762bc9927d1" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.067 [INFO][4102] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" iface="eth0" netns="/var/run/netns/cni-72185471-aee7-6e3d-6547-6762bc9927d1" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.067 [INFO][4102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.067 [INFO][4102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.160 [INFO][4111] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.160 [INFO][4111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.160 [INFO][4111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.174 [WARNING][4111] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.174 [INFO][4111] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.177 [INFO][4111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:01.185748 containerd[1460]: 2025-11-01 00:22:01.180 [INFO][4102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:01.186750 containerd[1460]: time="2025-11-01T00:22:01.186683683Z" level=info msg="TearDown network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" successfully" Nov 1 00:22:01.186750 containerd[1460]: time="2025-11-01T00:22:01.186750267Z" level=info msg="StopPodSandbox for \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" returns successfully" Nov 1 00:22:01.196625 containerd[1460]: time="2025-11-01T00:22:01.196560100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mbcv9,Uid:bcea8374-b606-49c4-b6e2-18f85c1c70c0,Namespace:kube-system,Attempt:1,}" Nov 1 00:22:01.199389 systemd[1]: run-netns-cni\x2d72185471\x2daee7\x2d6e3d\x2d6547\x2d6762bc9927d1.mount: Deactivated successfully. 
Nov 1 00:22:01.306734 kubelet[2603]: E1101 00:22:01.305515 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:22:01.522978 systemd-networkd[1357]: cali22a94cf026c: Link UP Nov 1 00:22:01.523327 systemd-networkd[1357]: cali22a94cf026c: Gained carrier Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.342 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0 coredns-66bc5c9577- kube-system bcea8374-b606-49c4-b6e2-18f85c1c70c0 955 0 2025-11-01 00:21:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 coredns-66bc5c9577-mbcv9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22a94cf026c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } 
{liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.343 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.423 [INFO][4133] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" HandleID="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.424 [INFO][4133] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" HandleID="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011f910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"coredns-66bc5c9577-mbcv9", "timestamp":"2025-11-01 00:22:01.423931381 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.424 [INFO][4133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.424 [INFO][4133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.425 [INFO][4133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.438 [INFO][4133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.446 [INFO][4133] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.454 [INFO][4133] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.460 [INFO][4133] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.466 [INFO][4133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.466 [INFO][4133] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" 
host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.476 [INFO][4133] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1 Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.491 [INFO][4133] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.510 [INFO][4133] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.66/26] block=192.168.106.64/26 handle="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.511 [INFO][4133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.66/26] handle="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.511 [INFO][4133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:22:01.556921 containerd[1460]: 2025-11-01 00:22:01.512 [INFO][4133] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.66/26] IPv6=[] ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" HandleID="k8s-pod-network.87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.563831 containerd[1460]: 2025-11-01 00:22:01.516 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcea8374-b606-49c4-b6e2-18f85c1c70c0", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"coredns-66bc5c9577-mbcv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22a94cf026c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:01.563831 containerd[1460]: 2025-11-01 00:22:01.516 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.66/32] ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.563831 containerd[1460]: 2025-11-01 00:22:01.516 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22a94cf026c ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.563831 containerd[1460]: 2025-11-01 00:22:01.524 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.564183 containerd[1460]: 2025-11-01 00:22:01.526 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcea8374-b606-49c4-b6e2-18f85c1c70c0", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1", Pod:"coredns-66bc5c9577-mbcv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali22a94cf026c", MAC:"1e:5d:db:91:32:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:01.564183 containerd[1460]: 2025-11-01 00:22:01.555 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1" Namespace="kube-system" Pod="coredns-66bc5c9577-mbcv9" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:01.606647 containerd[1460]: time="2025-11-01T00:22:01.605704432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:01.606647 containerd[1460]: time="2025-11-01T00:22:01.605804743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:01.606647 containerd[1460]: time="2025-11-01T00:22:01.605894495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:01.608377 containerd[1460]: time="2025-11-01T00:22:01.608076707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:01.677878 systemd[1]: Started cri-containerd-87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1.scope - libcontainer container 87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1. Nov 1 00:22:01.780136 containerd[1460]: time="2025-11-01T00:22:01.780029132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mbcv9,Uid:bcea8374-b606-49c4-b6e2-18f85c1c70c0,Namespace:kube-system,Attempt:1,} returns sandbox id \"87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1\"" Nov 1 00:22:01.793032 containerd[1460]: time="2025-11-01T00:22:01.792695737Z" level=info msg="CreateContainer within sandbox \"87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:01.821353 containerd[1460]: time="2025-11-01T00:22:01.820969459Z" level=info msg="CreateContainer within sandbox \"87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f320a4be40e233e42095071d92ed932ab8025b703a94433d21c7f055cb5ba329\"" Nov 1 00:22:01.824164 containerd[1460]: time="2025-11-01T00:22:01.823966243Z" level=info msg="StartContainer for \"f320a4be40e233e42095071d92ed932ab8025b703a94433d21c7f055cb5ba329\"" Nov 1 00:22:01.888892 systemd[1]: Started cri-containerd-f320a4be40e233e42095071d92ed932ab8025b703a94433d21c7f055cb5ba329.scope - libcontainer container f320a4be40e233e42095071d92ed932ab8025b703a94433d21c7f055cb5ba329. 
Nov 1 00:22:01.940349 containerd[1460]: time="2025-11-01T00:22:01.940035598Z" level=info msg="StartContainer for \"f320a4be40e233e42095071d92ed932ab8025b703a94433d21c7f055cb5ba329\" returns successfully" Nov 1 00:22:01.963851 containerd[1460]: time="2025-11-01T00:22:01.963774792Z" level=info msg="StopPodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\"" Nov 1 00:22:01.966184 containerd[1460]: time="2025-11-01T00:22:01.965775875Z" level=info msg="StopPodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\"" Nov 1 00:22:02.022231 systemd-networkd[1357]: cali774cb968d63: Gained IPv6LL Nov 1 00:22:02.206626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973506069.mount: Deactivated successfully. Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.146 [INFO][4258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.147 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" iface="eth0" netns="/var/run/netns/cni-43cd787e-b7ac-867b-13dc-b06c434826fd" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.148 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" iface="eth0" netns="/var/run/netns/cni-43cd787e-b7ac-867b-13dc-b06c434826fd" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.151 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" iface="eth0" netns="/var/run/netns/cni-43cd787e-b7ac-867b-13dc-b06c434826fd" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.152 [INFO][4258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.152 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.232 [INFO][4276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.234 [INFO][4276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.234 [INFO][4276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.267 [WARNING][4276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.268 [INFO][4276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.271 [INFO][4276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:02.280973 containerd[1460]: 2025-11-01 00:22:02.275 [INFO][4258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:02.282682 containerd[1460]: time="2025-11-01T00:22:02.282124705Z" level=info msg="TearDown network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" successfully" Nov 1 00:22:02.282682 containerd[1460]: time="2025-11-01T00:22:02.282166803Z" level=info msg="StopPodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" returns successfully" Nov 1 00:22:02.292880 containerd[1460]: time="2025-11-01T00:22:02.291793815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785c977b7d-jc8q5,Uid:047da681-0394-40f3-ae91-7205aadc4ab4,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:02.295331 systemd[1]: run-netns-cni\x2d43cd787e\x2db7ac\x2d867b\x2d13dc\x2db06c434826fd.mount: Deactivated successfully. 
Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.148 [INFO][4254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.148 [INFO][4254] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" iface="eth0" netns="/var/run/netns/cni-980af03d-c104-d442-45cd-afaf1846bafe" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.149 [INFO][4254] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" iface="eth0" netns="/var/run/netns/cni-980af03d-c104-d442-45cd-afaf1846bafe" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.149 [INFO][4254] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" iface="eth0" netns="/var/run/netns/cni-980af03d-c104-d442-45cd-afaf1846bafe" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.149 [INFO][4254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.149 [INFO][4254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.271 [INFO][4274] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 
00:22:02.271 [INFO][4274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.273 [INFO][4274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.293 [WARNING][4274] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.293 [INFO][4274] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.296 [INFO][4274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:02.306991 containerd[1460]: 2025-11-01 00:22:02.301 [INFO][4254] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:02.309924 containerd[1460]: time="2025-11-01T00:22:02.308134410Z" level=info msg="TearDown network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" successfully" Nov 1 00:22:02.309924 containerd[1460]: time="2025-11-01T00:22:02.308198468Z" level=info msg="StopPodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" returns successfully" Nov 1 00:22:02.315250 containerd[1460]: time="2025-11-01T00:22:02.314755226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dh4gs,Uid:4cb2b081-68eb-4d8c-9ca8-d19766928a32,Namespace:kube-system,Attempt:1,}" Nov 1 00:22:02.321457 systemd[1]: run-netns-cni\x2d980af03d\x2dc104\x2dd442\x2d45cd\x2dafaf1846bafe.mount: Deactivated successfully. Nov 1 00:22:02.423016 kubelet[2603]: I1101 00:22:02.421318 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mbcv9" podStartSLOduration=42.421293797 podStartE2EDuration="42.421293797s" podCreationTimestamp="2025-11-01 00:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:02.368804807 +0000 UTC m=+47.651814676" watchObservedRunningTime="2025-11-01 00:22:02.421293797 +0000 UTC m=+47.704303670" Nov 1 00:22:02.660783 systemd-networkd[1357]: cali4af03041320: Link UP Nov 1 00:22:02.662456 systemd-networkd[1357]: cali4af03041320: Gained carrier Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.476 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0 coredns-66bc5c9577- kube-system 4cb2b081-68eb-4d8c-9ca8-d19766928a32 973 0 2025-11-01 00:21:20 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 coredns-66bc5c9577-dh4gs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4af03041320 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.477 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.563 [INFO][4313] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" HandleID="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.566 [INFO][4313] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" HandleID="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000359060), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"coredns-66bc5c9577-dh4gs", "timestamp":"2025-11-01 00:22:02.563463995 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.566 [INFO][4313] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.566 [INFO][4313] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.566 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.591 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.598 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.605 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.608 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.614 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 
host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.614 [INFO][4313] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.618 [INFO][4313] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87 Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.624 [INFO][4313] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.639 [INFO][4313] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.67/26] block=192.168.106.64/26 handle="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.639 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.67/26] handle="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.640 [INFO][4313] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:22:02.704209 containerd[1460]: 2025-11-01 00:22:02.640 [INFO][4313] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.67/26] IPv6=[] ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" HandleID="k8s-pod-network.28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.707432 containerd[1460]: 2025-11-01 00:22:02.647 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cb2b081-68eb-4d8c-9ca8-d19766928a32", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"coredns-66bc5c9577-dh4gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4af03041320", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:02.707432 containerd[1460]: 2025-11-01 00:22:02.648 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.67/32] ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.707432 containerd[1460]: 2025-11-01 00:22:02.648 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4af03041320 ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.707432 containerd[1460]: 2025-11-01 00:22:02.669 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.709973 containerd[1460]: 2025-11-01 00:22:02.671 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cb2b081-68eb-4d8c-9ca8-d19766928a32", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87", Pod:"coredns-66bc5c9577-dh4gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali4af03041320", MAC:"d6:df:9e:71:d3:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:02.709973 containerd[1460]: 2025-11-01 00:22:02.698 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87" Namespace="kube-system" Pod="coredns-66bc5c9577-dh4gs" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:02.760678 containerd[1460]: time="2025-11-01T00:22:02.759905221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:02.760678 containerd[1460]: time="2025-11-01T00:22:02.759995258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:02.760678 containerd[1460]: time="2025-11-01T00:22:02.760041423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:02.760678 containerd[1460]: time="2025-11-01T00:22:02.760188316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:02.805850 systemd[1]: Started cri-containerd-28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87.scope - libcontainer container 28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87. Nov 1 00:22:02.814783 systemd-networkd[1357]: cali8b5906a06ca: Link UP Nov 1 00:22:02.821688 systemd-networkd[1357]: cali8b5906a06ca: Gained carrier Nov 1 00:22:02.854843 systemd-networkd[1357]: vxlan.calico: Gained IPv6LL Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.485 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0 calico-kube-controllers-785c977b7d- calico-system 047da681-0394-40f3-ae91-7205aadc4ab4 974 0 2025-11-01 00:21:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:785c977b7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 calico-kube-controllers-785c977b7d-jc8q5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8b5906a06ca [] [] }} ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.485 [INFO][4287] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.578 [INFO][4319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" HandleID="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.579 [INFO][4319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" HandleID="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"calico-kube-controllers-785c977b7d-jc8q5", "timestamp":"2025-11-01 00:22:02.578745622 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.582 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.641 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.641 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.695 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.714 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.729 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.735 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.742 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.742 [INFO][4319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.746 [INFO][4319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719 Nov 1 00:22:02.873104 containerd[1460]: 
2025-11-01 00:22:02.768 [INFO][4319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.789 [INFO][4319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.68/26] block=192.168.106.64/26 handle="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.790 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.68/26] handle="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.790 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:22:02.873104 containerd[1460]: 2025-11-01 00:22:02.790 [INFO][4319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.68/26] IPv6=[] ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" HandleID="k8s-pod-network.1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.877115 containerd[1460]: 2025-11-01 00:22:02.799 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0", GenerateName:"calico-kube-controllers-785c977b7d-", Namespace:"calico-system", SelfLink:"", UID:"047da681-0394-40f3-ae91-7205aadc4ab4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785c977b7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"calico-kube-controllers-785c977b7d-jc8q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b5906a06ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:02.877115 containerd[1460]: 2025-11-01 00:22:02.799 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.68/32] ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.877115 containerd[1460]: 2025-11-01 00:22:02.799 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b5906a06ca ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.877115 containerd[1460]: 2025-11-01 00:22:02.824 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.877115 containerd[1460]: 2025-11-01 00:22:02.827 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0", GenerateName:"calico-kube-controllers-785c977b7d-", Namespace:"calico-system", SelfLink:"", UID:"047da681-0394-40f3-ae91-7205aadc4ab4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785c977b7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719", Pod:"calico-kube-controllers-785c977b7d-jc8q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b5906a06ca", MAC:"9a:c1:d2:6b:45:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 
00:22:02.877115 containerd[1460]: 2025-11-01 00:22:02.867 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719" Namespace="calico-system" Pod="calico-kube-controllers-785c977b7d-jc8q5" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:02.918111 systemd-networkd[1357]: cali22a94cf026c: Gained IPv6LL Nov 1 00:22:02.949166 containerd[1460]: time="2025-11-01T00:22:02.946364539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:02.949166 containerd[1460]: time="2025-11-01T00:22:02.946487603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:02.949166 containerd[1460]: time="2025-11-01T00:22:02.946516149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:02.950852 containerd[1460]: time="2025-11-01T00:22:02.950005506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:03.005392 containerd[1460]: time="2025-11-01T00:22:03.005319807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dh4gs,Uid:4cb2b081-68eb-4d8c-9ca8-d19766928a32,Namespace:kube-system,Attempt:1,} returns sandbox id \"28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87\"" Nov 1 00:22:03.007832 systemd[1]: Started cri-containerd-1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719.scope - libcontainer container 1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719. 
Nov 1 00:22:03.018448 containerd[1460]: time="2025-11-01T00:22:03.018153615Z" level=info msg="CreateContainer within sandbox \"28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:03.051297 containerd[1460]: time="2025-11-01T00:22:03.051248066Z" level=info msg="CreateContainer within sandbox \"28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7734f4ecf02590b6fa159c25730c05485561398a117ab2992917377dc1355bb4\"" Nov 1 00:22:03.052651 containerd[1460]: time="2025-11-01T00:22:03.052414465Z" level=info msg="StartContainer for \"7734f4ecf02590b6fa159c25730c05485561398a117ab2992917377dc1355bb4\"" Nov 1 00:22:03.116565 systemd[1]: Started cri-containerd-7734f4ecf02590b6fa159c25730c05485561398a117ab2992917377dc1355bb4.scope - libcontainer container 7734f4ecf02590b6fa159c25730c05485561398a117ab2992917377dc1355bb4. Nov 1 00:22:03.184930 containerd[1460]: time="2025-11-01T00:22:03.183218900Z" level=info msg="StartContainer for \"7734f4ecf02590b6fa159c25730c05485561398a117ab2992917377dc1355bb4\" returns successfully" Nov 1 00:22:03.291856 containerd[1460]: time="2025-11-01T00:22:03.291793011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-785c977b7d-jc8q5,Uid:047da681-0394-40f3-ae91-7205aadc4ab4,Namespace:calico-system,Attempt:1,} returns sandbox id \"1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719\"" Nov 1 00:22:03.296522 containerd[1460]: time="2025-11-01T00:22:03.295796828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:03.353018 kubelet[2603]: I1101 00:22:03.352881 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dh4gs" podStartSLOduration=43.35285723 podStartE2EDuration="43.35285723s" podCreationTimestamp="2025-11-01 00:21:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:03.352655694 +0000 UTC m=+48.635665562" watchObservedRunningTime="2025-11-01 00:22:03.35285723 +0000 UTC m=+48.635867101" Nov 1 00:22:03.493884 containerd[1460]: time="2025-11-01T00:22:03.493564626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:03.495618 containerd[1460]: time="2025-11-01T00:22:03.495534974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:22:03.496673 containerd[1460]: time="2025-11-01T00:22:03.495550084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:03.496831 kubelet[2603]: E1101 00:22:03.495950 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:03.496831 kubelet[2603]: E1101 00:22:03.496014 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:03.496831 kubelet[2603]: E1101 00:22:03.496129 2603 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-785c977b7d-jc8q5_calico-system(047da681-0394-40f3-ae91-7205aadc4ab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:03.496831 kubelet[2603]: E1101 00:22:03.496183 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:22:03.962507 containerd[1460]: time="2025-11-01T00:22:03.961329962Z" level=info msg="StopPodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\"" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.029 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.029 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" iface="eth0" netns="/var/run/netns/cni-05366211-aa3d-a58a-80c5-48a2e897c901" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.031 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" iface="eth0" netns="/var/run/netns/cni-05366211-aa3d-a58a-80c5-48a2e897c901" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.032 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" iface="eth0" netns="/var/run/netns/cni-05366211-aa3d-a58a-80c5-48a2e897c901" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.032 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.032 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.070 [INFO][4512] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.071 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.071 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.079 [WARNING][4512] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.079 [INFO][4512] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.082 [INFO][4512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:04.086575 containerd[1460]: 2025-11-01 00:22:04.084 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:04.089154 containerd[1460]: time="2025-11-01T00:22:04.086863959Z" level=info msg="TearDown network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" successfully" Nov 1 00:22:04.089154 containerd[1460]: time="2025-11-01T00:22:04.086919922Z" level=info msg="StopPodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" returns successfully" Nov 1 00:22:04.093157 systemd[1]: run-netns-cni\x2d05366211\x2daa3d\x2da58a\x2d80c5\x2d48a2e897c901.mount: Deactivated successfully. 
Nov 1 00:22:04.095119 containerd[1460]: time="2025-11-01T00:22:04.094254915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-92jbd,Uid:160577c0-dc7a-4380-a5c7-096e0298b76b,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:22:04.265782 systemd-networkd[1357]: cali013bdbd72e8: Link UP Nov 1 00:22:04.268561 systemd-networkd[1357]: cali013bdbd72e8: Gained carrier Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.171 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0 calico-apiserver-55b4c78ffc- calico-apiserver 160577c0-dc7a-4380-a5c7-096e0298b76b 1007 0 2025-11-01 00:21:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55b4c78ffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 calico-apiserver-55b4c78ffc-92jbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali013bdbd72e8 [] [] }} ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.172 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.288525 containerd[1460]: 
2025-11-01 00:22:04.211 [INFO][4531] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" HandleID="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.212 [INFO][4531] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" HandleID="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"calico-apiserver-55b4c78ffc-92jbd", "timestamp":"2025-11-01 00:22:04.211751009 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.212 [INFO][4531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.212 [INFO][4531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.212 [INFO][4531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.223 [INFO][4531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.228 [INFO][4531] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.235 [INFO][4531] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.238 [INFO][4531] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.241 [INFO][4531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.241 [INFO][4531] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.243 [INFO][4531] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218 Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.248 [INFO][4531] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 
handle="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.258 [INFO][4531] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.69/26] block=192.168.106.64/26 handle="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.258 [INFO][4531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.69/26] handle="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.258 [INFO][4531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:04.288525 containerd[1460]: 2025-11-01 00:22:04.258 [INFO][4531] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.69/26] IPv6=[] ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" HandleID="k8s-pod-network.e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.291322 containerd[1460]: 2025-11-01 00:22:04.261 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"160577c0-dc7a-4380-a5c7-096e0298b76b", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"calico-apiserver-55b4c78ffc-92jbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali013bdbd72e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:04.291322 containerd[1460]: 2025-11-01 00:22:04.261 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.69/32] ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.291322 containerd[1460]: 2025-11-01 00:22:04.261 [INFO][4519] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali013bdbd72e8 ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.291322 containerd[1460]: 2025-11-01 00:22:04.264 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.291322 containerd[1460]: 2025-11-01 00:22:04.265 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"160577c0-dc7a-4380-a5c7-096e0298b76b", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218", Pod:"calico-apiserver-55b4c78ffc-92jbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali013bdbd72e8", MAC:"86:e0:5e:e9:6c:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:04.291322 containerd[1460]: 2025-11-01 00:22:04.283 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-92jbd" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:04.335125 containerd[1460]: time="2025-11-01T00:22:04.334982392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:04.336744 containerd[1460]: time="2025-11-01T00:22:04.335076445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:04.336744 containerd[1460]: time="2025-11-01T00:22:04.335105723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:04.336744 containerd[1460]: time="2025-11-01T00:22:04.335265646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:04.363305 kubelet[2603]: E1101 00:22:04.361755 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:22:04.407817 systemd[1]: Started cri-containerd-e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218.scope - libcontainer container e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218. 
Nov 1 00:22:04.544140 containerd[1460]: time="2025-11-01T00:22:04.543985764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-92jbd,Uid:160577c0-dc7a-4380-a5c7-096e0298b76b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218\"" Nov 1 00:22:04.549398 containerd[1460]: time="2025-11-01T00:22:04.548994409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:04.646560 systemd-networkd[1357]: cali4af03041320: Gained IPv6LL Nov 1 00:22:04.647078 systemd-networkd[1357]: cali8b5906a06ca: Gained IPv6LL Nov 1 00:22:04.759296 containerd[1460]: time="2025-11-01T00:22:04.759210605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:04.761109 containerd[1460]: time="2025-11-01T00:22:04.761045961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:04.761363 containerd[1460]: time="2025-11-01T00:22:04.761079065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:04.761456 kubelet[2603]: E1101 00:22:04.761383 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:04.763731 kubelet[2603]: E1101 00:22:04.761455 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:04.763731 kubelet[2603]: E1101 00:22:04.761559 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-92jbd_calico-apiserver(160577c0-dc7a-4380-a5c7-096e0298b76b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:04.763731 kubelet[2603]: E1101 00:22:04.761643 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:04.963289 containerd[1460]: time="2025-11-01T00:22:04.963053457Z" level=info msg="StopPodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\"" Nov 1 00:22:04.965782 containerd[1460]: time="2025-11-01T00:22:04.964229138Z" level=info msg="StopPodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\"" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.051 [INFO][4606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.051 [INFO][4606] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" iface="eth0" netns="/var/run/netns/cni-3e20057f-5273-3aad-aff8-30fadee37b2d" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.052 [INFO][4606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" iface="eth0" netns="/var/run/netns/cni-3e20057f-5273-3aad-aff8-30fadee37b2d" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.054 [INFO][4606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" iface="eth0" netns="/var/run/netns/cni-3e20057f-5273-3aad-aff8-30fadee37b2d" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.054 [INFO][4606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.054 [INFO][4606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.110 [INFO][4619] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.110 [INFO][4619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.110 [INFO][4619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.120 [WARNING][4619] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.120 [INFO][4619] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.122 [INFO][4619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:05.127736 containerd[1460]: 2025-11-01 00:22:05.124 [INFO][4606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:05.130951 containerd[1460]: time="2025-11-01T00:22:05.130678781Z" level=info msg="TearDown network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" successfully" Nov 1 00:22:05.132643 containerd[1460]: time="2025-11-01T00:22:05.130760028Z" level=info msg="StopPodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" returns successfully" Nov 1 00:22:05.133110 systemd[1]: run-netns-cni\x2d3e20057f\x2d5273\x2d3aad\x2daff8\x2d30fadee37b2d.mount: Deactivated successfully. 
Nov 1 00:22:05.139811 containerd[1460]: time="2025-11-01T00:22:05.139671141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gflnh,Uid:4b15d46f-f330-471a-8fc4-3dc35af1a685,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.058 [INFO][4605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.058 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" iface="eth0" netns="/var/run/netns/cni-5b4325d9-b30d-b796-a33f-be3414585b45" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.060 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" iface="eth0" netns="/var/run/netns/cni-5b4325d9-b30d-b796-a33f-be3414585b45" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.062 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" iface="eth0" netns="/var/run/netns/cni-5b4325d9-b30d-b796-a33f-be3414585b45" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.063 [INFO][4605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.063 [INFO][4605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.113 [INFO][4621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.113 [INFO][4621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.122 [INFO][4621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.142 [WARNING][4621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.142 [INFO][4621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.145 [INFO][4621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:05.150264 containerd[1460]: 2025-11-01 00:22:05.148 [INFO][4605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:05.151649 containerd[1460]: time="2025-11-01T00:22:05.150699690Z" level=info msg="TearDown network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" successfully" Nov 1 00:22:05.151649 containerd[1460]: time="2025-11-01T00:22:05.150737384Z" level=info msg="StopPodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" returns successfully" Nov 1 00:22:05.158131 systemd[1]: run-netns-cni\x2d5b4325d9\x2db30d\x2db796\x2da33f\x2dbe3414585b45.mount: Deactivated successfully. 
Nov 1 00:22:05.163694 containerd[1460]: time="2025-11-01T00:22:05.163636840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvqzr,Uid:a13cec52-774e-41dd-8b73-7a0c3559c1e0,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:05.352508 systemd-networkd[1357]: cali013bdbd72e8: Gained IPv6LL Nov 1 00:22:05.389865 kubelet[2603]: E1101 00:22:05.388831 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:05.530783 systemd-networkd[1357]: cali4f496f455e7: Link UP Nov 1 00:22:05.534719 systemd-networkd[1357]: cali4f496f455e7: Gained carrier Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.284 [INFO][4632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0 goldmane-7c778bb748- calico-system 4b15d46f-f330-471a-8fc4-3dc35af1a685 1021 0 2025-11-01 00:21:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 goldmane-7c778bb748-gflnh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4f496f455e7 [] [] }} ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" 
Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.284 [INFO][4632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.434 [INFO][4657] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" HandleID="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.434 [INFO][4657] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" HandleID="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"goldmane-7c778bb748-gflnh", "timestamp":"2025-11-01 00:22:05.434096397 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.434 [INFO][4657] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.434 [INFO][4657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.434 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.470 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.481 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.489 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.493 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.498 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.498 [INFO][4657] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.501 [INFO][4657] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54 Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.508 [INFO][4657] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.518 [INFO][4657] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.70/26] block=192.168.106.64/26 handle="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.518 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.70/26] handle="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.518 [INFO][4657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:22:05.563740 containerd[1460]: 2025-11-01 00:22:05.518 [INFO][4657] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.70/26] IPv6=[] ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" HandleID="k8s-pod-network.8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.565884 containerd[1460]: 2025-11-01 00:22:05.522 [INFO][4632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4b15d46f-f330-471a-8fc4-3dc35af1a685", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"goldmane-7c778bb748-gflnh", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f496f455e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:05.565884 containerd[1460]: 2025-11-01 00:22:05.523 [INFO][4632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.70/32] ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.565884 containerd[1460]: 2025-11-01 00:22:05.523 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f496f455e7 ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.565884 containerd[1460]: 2025-11-01 00:22:05.536 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.565884 containerd[1460]: 2025-11-01 00:22:05.538 [INFO][4632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4b15d46f-f330-471a-8fc4-3dc35af1a685", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54", Pod:"goldmane-7c778bb748-gflnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f496f455e7", MAC:"ba:c3:4f:63:37:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:05.565884 containerd[1460]: 2025-11-01 00:22:05.560 [INFO][4632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54" Namespace="calico-system" Pod="goldmane-7c778bb748-gflnh" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:05.628006 
containerd[1460]: time="2025-11-01T00:22:05.625663498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:05.635415 containerd[1460]: time="2025-11-01T00:22:05.630768216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:05.635415 containerd[1460]: time="2025-11-01T00:22:05.630809858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.635415 containerd[1460]: time="2025-11-01T00:22:05.630964587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.663044 systemd-networkd[1357]: cali8f8734ea2c7: Link UP Nov 1 00:22:05.666406 systemd-networkd[1357]: cali8f8734ea2c7: Gained carrier Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.337 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0 csi-node-driver- calico-system a13cec52-774e-41dd-8b73-7a0c3559c1e0 1022 0 2025-11-01 00:21:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 csi-node-driver-cvqzr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f8734ea2c7 [] [] }} ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" 
WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.338 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.474 [INFO][4663] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" HandleID="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.475 [INFO][4663] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" HandleID="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"csi-node-driver-cvqzr", "timestamp":"2025-11-01 00:22:05.474935882 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.476 [INFO][4663] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.518 [INFO][4663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.519 [INFO][4663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.569 [INFO][4663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.582 [INFO][4663] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.591 [INFO][4663] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.597 [INFO][4663] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.602 [INFO][4663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.602 [INFO][4663] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.606 [INFO][4663] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d Nov 1 00:22:05.715550 
containerd[1460]: 2025-11-01 00:22:05.619 [INFO][4663] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 handle="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.643 [INFO][4663] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.71/26] block=192.168.106.64/26 handle="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.643 [INFO][4663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.71/26] handle="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.643 [INFO][4663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:05.715550 containerd[1460]: 2025-11-01 00:22:05.644 [INFO][4663] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.71/26] IPv6=[] ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" HandleID="k8s-pod-network.e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.715185 systemd[1]: Started cri-containerd-8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54.scope - libcontainer container 8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54. 
Nov 1 00:22:05.716848 containerd[1460]: 2025-11-01 00:22:05.655 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a13cec52-774e-41dd-8b73-7a0c3559c1e0", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"csi-node-driver-cvqzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f8734ea2c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:05.716848 containerd[1460]: 2025-11-01 
00:22:05.656 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.71/32] ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.716848 containerd[1460]: 2025-11-01 00:22:05.656 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f8734ea2c7 ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.716848 containerd[1460]: 2025-11-01 00:22:05.670 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.716848 containerd[1460]: 2025-11-01 00:22:05.671 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a13cec52-774e-41dd-8b73-7a0c3559c1e0", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 
21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d", Pod:"csi-node-driver-cvqzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f8734ea2c7", MAC:"e2:a1:f8:6e:ca:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:05.716848 containerd[1460]: 2025-11-01 00:22:05.701 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d" Namespace="calico-system" Pod="csi-node-driver-cvqzr" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:05.781684 containerd[1460]: time="2025-11-01T00:22:05.780884430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:05.781684 containerd[1460]: time="2025-11-01T00:22:05.781334138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:05.781684 containerd[1460]: time="2025-11-01T00:22:05.781376827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.785301 containerd[1460]: time="2025-11-01T00:22:05.784969042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:05.840316 containerd[1460]: time="2025-11-01T00:22:05.840151397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-gflnh,Uid:4b15d46f-f330-471a-8fc4-3dc35af1a685,Namespace:calico-system,Attempt:1,} returns sandbox id \"8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54\"" Nov 1 00:22:05.845118 containerd[1460]: time="2025-11-01T00:22:05.844748600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:05.847014 systemd[1]: Started cri-containerd-e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d.scope - libcontainer container e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d. 
Nov 1 00:22:05.896792 containerd[1460]: time="2025-11-01T00:22:05.896549902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cvqzr,Uid:a13cec52-774e-41dd-8b73-7a0c3559c1e0,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d\"" Nov 1 00:22:05.962685 containerd[1460]: time="2025-11-01T00:22:05.961503508Z" level=info msg="StopPodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\"" Nov 1 00:22:06.061635 containerd[1460]: time="2025-11-01T00:22:06.059864619Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:06.062230 containerd[1460]: time="2025-11-01T00:22:06.061695713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:06.062230 containerd[1460]: time="2025-11-01T00:22:06.061766273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:06.062341 kubelet[2603]: E1101 00:22:06.062133 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:06.062341 kubelet[2603]: E1101 00:22:06.062196 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:06.062920 kubelet[2603]: E1101 00:22:06.062440 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gflnh_calico-system(4b15d46f-f330-471a-8fc4-3dc35af1a685): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:06.062920 kubelet[2603]: E1101 00:22:06.062492 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:22:06.063035 containerd[1460]: time="2025-11-01T00:22:06.062987947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.023 [INFO][4783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.023 [INFO][4783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" iface="eth0" netns="/var/run/netns/cni-df971839-b69e-bf94-4907-ed60287599b5" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.024 [INFO][4783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" iface="eth0" netns="/var/run/netns/cni-df971839-b69e-bf94-4907-ed60287599b5" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.025 [INFO][4783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" iface="eth0" netns="/var/run/netns/cni-df971839-b69e-bf94-4907-ed60287599b5" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.025 [INFO][4783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.025 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.070 [INFO][4791] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.070 [INFO][4791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.070 [INFO][4791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.083 [WARNING][4791] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.083 [INFO][4791] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.085 [INFO][4791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:06.089109 containerd[1460]: 2025-11-01 00:22:06.086 [INFO][4783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:06.089971 containerd[1460]: time="2025-11-01T00:22:06.089914760Z" level=info msg="TearDown network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" successfully" Nov 1 00:22:06.089971 containerd[1460]: time="2025-11-01T00:22:06.089971089Z" level=info msg="StopPodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" returns successfully" Nov 1 00:22:06.094517 containerd[1460]: time="2025-11-01T00:22:06.094461437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-tjjrz,Uid:af0d3006-44cf-49fd-af6f-37984237612e,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:22:06.252872 systemd-networkd[1357]: cali9b5c0f4195c: Link UP Nov 1 00:22:06.257059 systemd-networkd[1357]: cali9b5c0f4195c: Gained carrier Nov 1 00:22:06.269805 containerd[1460]: time="2025-11-01T00:22:06.269750183Z" level=info msg="trying next host - response 
was http.StatusNotFound" host=ghcr.io Nov 1 00:22:06.272111 containerd[1460]: time="2025-11-01T00:22:06.271884625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:06.272111 containerd[1460]: time="2025-11-01T00:22:06.271926283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:22:06.272306 kubelet[2603]: E1101 00:22:06.272243 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:06.272375 kubelet[2603]: E1101 00:22:06.272305 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:06.272429 kubelet[2603]: E1101 00:22:06.272408 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:06.275652 containerd[1460]: time="2025-11-01T00:22:06.275375766Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.158 [INFO][4799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0 calico-apiserver-55b4c78ffc- calico-apiserver af0d3006-44cf-49fd-af6f-37984237612e 1044 0 2025-11-01 00:21:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55b4c78ffc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9 calico-apiserver-55b4c78ffc-tjjrz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b5c0f4195c [] [] }} ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.159 [INFO][4799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.192 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" HandleID="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" 
Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.192 [INFO][4810] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" HandleID="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", "pod":"calico-apiserver-55b4c78ffc-tjjrz", "timestamp":"2025-11-01 00:22:06.192422996 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.192 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.192 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.192 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9' Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.205 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.211 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.216 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.218 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.221 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.64/26 host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.221 [INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.106.64/26 handle="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.224 [INFO][4810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.230 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.106.64/26 
handle="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.239 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.106.72/26] block=192.168.106.64/26 handle="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.239 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.72/26] handle="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" host="ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9" Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.239 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:06.289097 containerd[1460]: 2025-11-01 00:22:06.239 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.106.72/26] IPv6=[] ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" HandleID="k8s-pod-network.369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.292566 containerd[1460]: 2025-11-01 00:22:06.243 [INFO][4799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"af0d3006-44cf-49fd-af6f-37984237612e", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"", Pod:"calico-apiserver-55b4c78ffc-tjjrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b5c0f4195c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:06.292566 containerd[1460]: 2025-11-01 00:22:06.244 [INFO][4799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.72/32] ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.292566 containerd[1460]: 2025-11-01 00:22:06.244 [INFO][4799] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b5c0f4195c ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.292566 containerd[1460]: 2025-11-01 00:22:06.255 [INFO][4799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.292566 containerd[1460]: 2025-11-01 00:22:06.258 [INFO][4799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"af0d3006-44cf-49fd-af6f-37984237612e", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc", Pod:"calico-apiserver-55b4c78ffc-tjjrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b5c0f4195c", MAC:"a2:d4:fd:34:cf:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:06.292566 containerd[1460]: 2025-11-01 00:22:06.283 [INFO][4799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc" Namespace="calico-apiserver" Pod="calico-apiserver-55b4c78ffc-tjjrz" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:06.326757 containerd[1460]: time="2025-11-01T00:22:06.326237988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:06.326757 containerd[1460]: time="2025-11-01T00:22:06.326330304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:06.326757 containerd[1460]: time="2025-11-01T00:22:06.326357269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:06.327512 containerd[1460]: time="2025-11-01T00:22:06.326577667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:06.369566 systemd[1]: run-netns-cni\x2ddf971839\x2db69e\x2dbf94\x2d4907\x2ded60287599b5.mount: Deactivated successfully. Nov 1 00:22:06.387482 kubelet[2603]: E1101 00:22:06.387414 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:22:06.391064 systemd[1]: Started cri-containerd-369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc.scope - libcontainer container 369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc. 
Nov 1 00:22:06.395913 kubelet[2603]: E1101 00:22:06.395836 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:06.505573 containerd[1460]: time="2025-11-01T00:22:06.505282855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:06.508523 containerd[1460]: time="2025-11-01T00:22:06.507339809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b4c78ffc-tjjrz,Uid:af0d3006-44cf-49fd-af6f-37984237612e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc\"" Nov 1 00:22:06.512159 containerd[1460]: time="2025-11-01T00:22:06.511995263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:06.512159 containerd[1460]: time="2025-11-01T00:22:06.512086310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:22:06.512760 kubelet[2603]: E1101 00:22:06.512564 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:06.512890 kubelet[2603]: E1101 00:22:06.512773 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:06.513845 kubelet[2603]: E1101 00:22:06.513803 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:06.514020 kubelet[2603]: E1101 00:22:06.513881 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:22:06.515930 containerd[1460]: time="2025-11-01T00:22:06.515887221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:06.718862 containerd[1460]: time="2025-11-01T00:22:06.718796764Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:06.720596 containerd[1460]: time="2025-11-01T00:22:06.720536397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:06.720757 containerd[1460]: time="2025-11-01T00:22:06.720546483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:06.721040 kubelet[2603]: E1101 00:22:06.720949 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:06.721174 kubelet[2603]: E1101 00:22:06.721057 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:06.721236 kubelet[2603]: E1101 00:22:06.721184 2603 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-tjjrz_calico-apiserver(af0d3006-44cf-49fd-af6f-37984237612e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:06.721294 kubelet[2603]: E1101 00:22:06.721237 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:22:06.822503 systemd-networkd[1357]: cali4f496f455e7: Gained IPv6LL Nov 1 00:22:07.399642 kubelet[2603]: E1101 00:22:07.399499 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:22:07.403189 kubelet[2603]: E1101 00:22:07.403134 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:22:07.403769 kubelet[2603]: E1101 00:22:07.403701 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:22:07.654898 systemd-networkd[1357]: cali8f8734ea2c7: Gained IPv6LL Nov 1 00:22:08.166018 systemd-networkd[1357]: cali9b5c0f4195c: Gained IPv6LL Nov 1 00:22:08.401206 kubelet[2603]: E1101 00:22:08.400691 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:22:10.635716 ntpd[1429]: Listen normally on 8 vxlan.calico 192.168.106.64:123 Nov 1 00:22:10.635849 ntpd[1429]: Listen normally on 9 cali774cb968d63 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 8 vxlan.calico 192.168.106.64:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 9 cali774cb968d63 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 10 vxlan.calico [fe80::6415:18ff:fecd:5f61%5]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 11 cali22a94cf026c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 12 cali4af03041320 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 13 cali8b5906a06ca [fe80::ecee:eeff:feee:eeee%10]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 14 cali013bdbd72e8 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 15 cali4f496f455e7 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 16 cali8f8734ea2c7 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 1 00:22:10.636412 ntpd[1429]: 1 Nov 00:22:10 ntpd[1429]: Listen normally on 17 cali9b5c0f4195c [fe80::ecee:eeff:feee:eeee%14]:123 Nov 1 00:22:10.635954 ntpd[1429]: Listen normally on 10 vxlan.calico [fe80::6415:18ff:fecd:5f61%5]:123 Nov 1 00:22:10.636024 ntpd[1429]: Listen normally on 11 cali22a94cf026c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 1 00:22:10.636083 ntpd[1429]: Listen normally on 12 
cali4af03041320 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 1 00:22:10.636145 ntpd[1429]: Listen normally on 13 cali8b5906a06ca [fe80::ecee:eeff:feee:eeee%10]:123 Nov 1 00:22:10.636201 ntpd[1429]: Listen normally on 14 cali013bdbd72e8 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 1 00:22:10.636256 ntpd[1429]: Listen normally on 15 cali4f496f455e7 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 1 00:22:10.636310 ntpd[1429]: Listen normally on 16 cali8f8734ea2c7 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 1 00:22:10.636368 ntpd[1429]: Listen normally on 17 cali9b5c0f4195c [fe80::ecee:eeff:feee:eeee%14]:123 Nov 1 00:22:13.963675 containerd[1460]: time="2025-11-01T00:22:13.963470959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:22:14.171181 containerd[1460]: time="2025-11-01T00:22:14.171114505Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:14.173451 containerd[1460]: time="2025-11-01T00:22:14.173387337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:22:14.173655 containerd[1460]: time="2025-11-01T00:22:14.173505467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:22:14.173853 kubelet[2603]: E1101 00:22:14.173790 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:14.174550 kubelet[2603]: E1101 00:22:14.173857 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:14.174550 kubelet[2603]: E1101 00:22:14.173988 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:14.176184 containerd[1460]: time="2025-11-01T00:22:14.176150479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:22:14.397391 containerd[1460]: time="2025-11-01T00:22:14.397329494Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:14.399386 containerd[1460]: time="2025-11-01T00:22:14.399300394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:22:14.400073 containerd[1460]: time="2025-11-01T00:22:14.399365545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:14.400143 kubelet[2603]: E1101 00:22:14.399661 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:14.400143 kubelet[2603]: E1101 00:22:14.399725 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:14.400143 kubelet[2603]: E1101 00:22:14.399837 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:14.400335 kubelet[2603]: E1101 00:22:14.399896 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 
00:22:14.906737 containerd[1460]: time="2025-11-01T00:22:14.906684119Z" level=info msg="StopPodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\"" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.955 [WARNING][4891] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cb2b081-68eb-4d8c-9ca8-d19766928a32", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87", Pod:"coredns-66bc5c9577-dh4gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4af03041320", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.956 [INFO][4891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.956 [INFO][4891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" iface="eth0" netns="" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.956 [INFO][4891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.956 [INFO][4891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.992 [INFO][4898] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.992 [INFO][4898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:14.992 [INFO][4898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:15.003 [WARNING][4898] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:15.003 [INFO][4898] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:15.004 [INFO][4898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.008390 containerd[1460]: 2025-11-01 00:22:15.006 [INFO][4891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.008390 containerd[1460]: time="2025-11-01T00:22:15.008234615Z" level=info msg="TearDown network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" successfully" Nov 1 00:22:15.008390 containerd[1460]: time="2025-11-01T00:22:15.008263766Z" level=info msg="StopPodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" returns successfully" Nov 1 00:22:15.010388 containerd[1460]: time="2025-11-01T00:22:15.009131920Z" level=info msg="RemovePodSandbox for \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\"" Nov 1 00:22:15.010388 containerd[1460]: time="2025-11-01T00:22:15.009176352Z" level=info msg="Forcibly stopping sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\"" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.074 [WARNING][4915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cb2b081-68eb-4d8c-9ca8-d19766928a32", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"28d8ce802c894e52cb14b020739616aca52d75178206d795c9aab3b487ba9f87", Pod:"coredns-66bc5c9577-dh4gs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4af03041320", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.074 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.074 [INFO][4915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" iface="eth0" netns="" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.074 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.074 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.104 [INFO][4923] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.104 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.104 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.112 [WARNING][4923] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.113 [INFO][4923] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" HandleID="k8s-pod-network.a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--dh4gs-eth0" Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.115 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.118675 containerd[1460]: 2025-11-01 00:22:15.116 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb" Nov 1 00:22:15.119481 containerd[1460]: time="2025-11-01T00:22:15.118799442Z" level=info msg="TearDown network for sandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" successfully" Nov 1 00:22:15.124296 containerd[1460]: time="2025-11-01T00:22:15.124237379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:15.124508 containerd[1460]: time="2025-11-01T00:22:15.124332929Z" level=info msg="RemovePodSandbox \"a5f1ba62b9d6313681f907f78cc1e24a609028b37ed69067baec6e5b0dc1c6cb\" returns successfully" Nov 1 00:22:15.125527 containerd[1460]: time="2025-11-01T00:22:15.125097550Z" level=info msg="StopPodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\"" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.169 [WARNING][4938] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.170 [INFO][4938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.170 [INFO][4938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" iface="eth0" netns="" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.170 [INFO][4938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.170 [INFO][4938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.197 [INFO][4945] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.197 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.197 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.207 [WARNING][4945] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.207 [INFO][4945] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.209 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.212939 containerd[1460]: 2025-11-01 00:22:15.211 [INFO][4938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.213707 containerd[1460]: time="2025-11-01T00:22:15.213654684Z" level=info msg="TearDown network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" successfully" Nov 1 00:22:15.213785 containerd[1460]: time="2025-11-01T00:22:15.213706639Z" level=info msg="StopPodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" returns successfully" Nov 1 00:22:15.214932 containerd[1460]: time="2025-11-01T00:22:15.214453309Z" level=info msg="RemovePodSandbox for \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\"" Nov 1 00:22:15.214932 containerd[1460]: time="2025-11-01T00:22:15.214497982Z" level=info msg="Forcibly stopping sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\"" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.263 [WARNING][4959] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward 
with the clean up ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" WorkloadEndpoint="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.263 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.264 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" iface="eth0" netns="" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.264 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.264 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.297 [INFO][4966] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.297 [INFO][4966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.297 [INFO][4966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.309 [WARNING][4966] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.309 [INFO][4966] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" HandleID="k8s-pod-network.18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-whisker--58c88756--g94p6-eth0" Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.311 [INFO][4966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.314835 containerd[1460]: 2025-11-01 00:22:15.313 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3" Nov 1 00:22:15.315758 containerd[1460]: time="2025-11-01T00:22:15.315693695Z" level=info msg="TearDown network for sandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" successfully" Nov 1 00:22:15.321331 containerd[1460]: time="2025-11-01T00:22:15.321261757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:15.321623 containerd[1460]: time="2025-11-01T00:22:15.321343745Z" level=info msg="RemovePodSandbox \"18b4feced7c70138c1c331f6f34da1893b523cb1efb5d689c57a5e70632c03c3\" returns successfully" Nov 1 00:22:15.322045 containerd[1460]: time="2025-11-01T00:22:15.321996954Z" level=info msg="StopPodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\"" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.375 [WARNING][4980] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0", GenerateName:"calico-kube-controllers-785c977b7d-", Namespace:"calico-system", SelfLink:"", UID:"047da681-0394-40f3-ae91-7205aadc4ab4", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785c977b7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719", Pod:"calico-kube-controllers-785c977b7d-jc8q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b5906a06ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.375 [INFO][4980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.375 [INFO][4980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" iface="eth0" netns="" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.375 [INFO][4980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.375 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.403 [INFO][4987] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.403 [INFO][4987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.403 [INFO][4987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.415 [WARNING][4987] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.415 [INFO][4987] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.418 [INFO][4987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.429571 containerd[1460]: 2025-11-01 00:22:15.425 [INFO][4980] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.431162 containerd[1460]: time="2025-11-01T00:22:15.431120458Z" level=info msg="TearDown network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" successfully" Nov 1 00:22:15.431162 containerd[1460]: time="2025-11-01T00:22:15.431159516Z" level=info msg="StopPodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" returns successfully" Nov 1 00:22:15.431856 containerd[1460]: time="2025-11-01T00:22:15.431824993Z" level=info msg="RemovePodSandbox for \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\"" Nov 1 00:22:15.432696 containerd[1460]: time="2025-11-01T00:22:15.431862458Z" level=info msg="Forcibly stopping sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\"" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.497 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0", GenerateName:"calico-kube-controllers-785c977b7d-", Namespace:"calico-system", SelfLink:"", UID:"047da681-0394-40f3-ae91-7205aadc4ab4", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"785c977b7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"1ca657539a321e3d5d3477a6abd17d91e6ba4240a59142491a29b5afd8e7d719", Pod:"calico-kube-controllers-785c977b7d-jc8q5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8b5906a06ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.497 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.542644 
containerd[1460]: 2025-11-01 00:22:15.497 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" iface="eth0" netns="" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.497 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.497 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.525 [INFO][5008] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.525 [INFO][5008] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.525 [INFO][5008] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.535 [WARNING][5008] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.535 [INFO][5008] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" HandleID="k8s-pod-network.750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--kube--controllers--785c977b7d--jc8q5-eth0" Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.537 [INFO][5008] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.542644 containerd[1460]: 2025-11-01 00:22:15.539 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9" Nov 1 00:22:15.542644 containerd[1460]: time="2025-11-01T00:22:15.541023653Z" level=info msg="TearDown network for sandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" successfully" Nov 1 00:22:15.546338 containerd[1460]: time="2025-11-01T00:22:15.546277348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:15.546654 containerd[1460]: time="2025-11-01T00:22:15.546364980Z" level=info msg="RemovePodSandbox \"750c4c34498c1e450c295586e754e3515a9b8b7eb699ec0d18867632a2ec99a9\" returns successfully" Nov 1 00:22:15.547499 containerd[1460]: time="2025-11-01T00:22:15.547449628Z" level=info msg="StopPodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\"" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.593 [WARNING][5022] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4b15d46f-f330-471a-8fc4-3dc35af1a685", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54", Pod:"goldmane-7c778bb748-gflnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f496f455e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.594 [INFO][5022] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.594 [INFO][5022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" iface="eth0" netns="" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.594 [INFO][5022] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.594 [INFO][5022] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.622 [INFO][5030] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.622 [INFO][5030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.622 [INFO][5030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.631 [WARNING][5030] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.631 [INFO][5030] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.633 [INFO][5030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.636433 containerd[1460]: 2025-11-01 00:22:15.634 [INFO][5022] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.637200 containerd[1460]: time="2025-11-01T00:22:15.636491639Z" level=info msg="TearDown network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" successfully" Nov 1 00:22:15.637200 containerd[1460]: time="2025-11-01T00:22:15.636526034Z" level=info msg="StopPodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" returns successfully" Nov 1 00:22:15.637314 containerd[1460]: time="2025-11-01T00:22:15.637191497Z" level=info msg="RemovePodSandbox for \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\"" Nov 1 00:22:15.637314 containerd[1460]: time="2025-11-01T00:22:15.637230315Z" level=info msg="Forcibly stopping sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\"" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.691 [WARNING][5044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4b15d46f-f330-471a-8fc4-3dc35af1a685", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"8aeab191857e211d04ba5d79818f054d84ed1828a9b97843517f6d479545ff54", Pod:"goldmane-7c778bb748-gflnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f496f455e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.691 [INFO][5044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.691 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" iface="eth0" netns="" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.691 [INFO][5044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.691 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.720 [INFO][5051] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.720 [INFO][5051] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.720 [INFO][5051] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.732 [WARNING][5051] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.732 [INFO][5051] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" HandleID="k8s-pod-network.3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-goldmane--7c778bb748--gflnh-eth0" Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.734 [INFO][5051] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.737658 containerd[1460]: 2025-11-01 00:22:15.736 [INFO][5044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2" Nov 1 00:22:15.737658 containerd[1460]: time="2025-11-01T00:22:15.737636751Z" level=info msg="TearDown network for sandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" successfully" Nov 1 00:22:15.743203 containerd[1460]: time="2025-11-01T00:22:15.743113635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:15.743360 containerd[1460]: time="2025-11-01T00:22:15.743223654Z" level=info msg="RemovePodSandbox \"3a58ed7eb9b316b36fbedaf7f3c55d2fefe7c5d5514c153eea545c7a9b4a18b2\" returns successfully" Nov 1 00:22:15.744155 containerd[1460]: time="2025-11-01T00:22:15.744099566Z" level=info msg="StopPodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\"" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.792 [WARNING][5065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"af0d3006-44cf-49fd-af6f-37984237612e", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc", Pod:"calico-apiserver-55b4c78ffc-tjjrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.106.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b5c0f4195c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.793 [INFO][5065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.793 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" iface="eth0" netns="" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.793 [INFO][5065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.793 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.819 [INFO][5072] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.819 [INFO][5072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.820 [INFO][5072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.829 [WARNING][5072] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.829 [INFO][5072] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.831 [INFO][5072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.834972 containerd[1460]: 2025-11-01 00:22:15.833 [INFO][5065] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.834972 containerd[1460]: time="2025-11-01T00:22:15.834914993Z" level=info msg="TearDown network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" successfully" Nov 1 00:22:15.836713 containerd[1460]: time="2025-11-01T00:22:15.836659778Z" level=info msg="StopPodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" returns successfully" Nov 1 00:22:15.837701 containerd[1460]: time="2025-11-01T00:22:15.837545627Z" level=info msg="RemovePodSandbox for \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\"" Nov 1 00:22:15.838111 containerd[1460]: time="2025-11-01T00:22:15.838020345Z" level=info msg="Forcibly stopping sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\"" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.885 [WARNING][5087] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"af0d3006-44cf-49fd-af6f-37984237612e", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"369150aeb7669c6f24f2b05e78d6fc3956d023e64241cf7288ea3a29ca25dafc", Pod:"calico-apiserver-55b4c78ffc-tjjrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b5c0f4195c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.885 [INFO][5087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 
00:22:15.885 [INFO][5087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" iface="eth0" netns="" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.885 [INFO][5087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.885 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.917 [INFO][5095] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.917 [INFO][5095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.917 [INFO][5095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.926 [WARNING][5095] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.926 [INFO][5095] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" HandleID="k8s-pod-network.d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--tjjrz-eth0" Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.928 [INFO][5095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:15.931130 containerd[1460]: 2025-11-01 00:22:15.929 [INFO][5087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76" Nov 1 00:22:15.931130 containerd[1460]: time="2025-11-01T00:22:15.931100887Z" level=info msg="TearDown network for sandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" successfully" Nov 1 00:22:15.936125 containerd[1460]: time="2025-11-01T00:22:15.936056531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:15.936307 containerd[1460]: time="2025-11-01T00:22:15.936139255Z" level=info msg="RemovePodSandbox \"d24a6bea3b167998636fb4e8c86347409477831abad3682b1de809617e43fc76\" returns successfully" Nov 1 00:22:15.936818 containerd[1460]: time="2025-11-01T00:22:15.936782261Z" level=info msg="StopPodSandbox for \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\"" Nov 1 00:22:15.967187 containerd[1460]: time="2025-11-01T00:22:15.966874099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:15.994 [WARNING][5109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcea8374-b606-49c4-b6e2-18f85c1c70c0", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1", Pod:"coredns-66bc5c9577-mbcv9", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22a94cf026c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:15.994 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:15.994 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" iface="eth0" netns="" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:15.994 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:15.994 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.023 [INFO][5117] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.023 [INFO][5117] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.023 [INFO][5117] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.032 [WARNING][5117] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.032 [INFO][5117] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.034 [INFO][5117] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:16.038803 containerd[1460]: 2025-11-01 00:22:16.036 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.041141 containerd[1460]: time="2025-11-01T00:22:16.039329024Z" level=info msg="TearDown network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" successfully" Nov 1 00:22:16.041141 containerd[1460]: time="2025-11-01T00:22:16.039364910Z" level=info msg="StopPodSandbox for \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" returns successfully" Nov 1 00:22:16.041141 containerd[1460]: time="2025-11-01T00:22:16.039962552Z" level=info msg="RemovePodSandbox for \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\"" Nov 1 00:22:16.041141 containerd[1460]: time="2025-11-01T00:22:16.040001982Z" level=info msg="Forcibly stopping sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\"" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.090 [WARNING][5131] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bcea8374-b606-49c4-b6e2-18f85c1c70c0", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"87037fe3d9481b38e073f3ec9abf2e46a5c26592517f23ff65f851058bc479f1", Pod:"coredns-66bc5c9577-mbcv9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22a94cf026c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.090 [INFO][5131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.090 [INFO][5131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" iface="eth0" netns="" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.090 [INFO][5131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.090 [INFO][5131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.119 [INFO][5139] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.119 [INFO][5139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.119 [INFO][5139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.129 [WARNING][5139] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.129 [INFO][5139] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" HandleID="k8s-pod-network.2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-coredns--66bc5c9577--mbcv9-eth0" Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.133 [INFO][5139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:16.137110 containerd[1460]: 2025-11-01 00:22:16.135 [INFO][5131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44" Nov 1 00:22:16.138334 containerd[1460]: time="2025-11-01T00:22:16.137159833Z" level=info msg="TearDown network for sandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" successfully" Nov 1 00:22:16.142233 containerd[1460]: time="2025-11-01T00:22:16.142175708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:16.142382 containerd[1460]: time="2025-11-01T00:22:16.142252024Z" level=info msg="RemovePodSandbox \"2e8e2664f6835b6388f64d60509240fa64a28cfb95c48ecb4929f07d39fcfd44\" returns successfully" Nov 1 00:22:16.143358 containerd[1460]: time="2025-11-01T00:22:16.142943479Z" level=info msg="StopPodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\"" Nov 1 00:22:16.166885 containerd[1460]: time="2025-11-01T00:22:16.166828960Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:16.169136 containerd[1460]: time="2025-11-01T00:22:16.168924507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:22:16.169136 containerd[1460]: time="2025-11-01T00:22:16.168984184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:16.170671 kubelet[2603]: E1101 00:22:16.169644 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:16.170671 kubelet[2603]: E1101 00:22:16.169855 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:16.170671 kubelet[2603]: E1101 00:22:16.169977 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-785c977b7d-jc8q5_calico-system(047da681-0394-40f3-ae91-7205aadc4ab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:16.170671 kubelet[2603]: E1101 00:22:16.170034 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.196 [WARNING][5153] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a13cec52-774e-41dd-8b73-7a0c3559c1e0", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d", Pod:"csi-node-driver-cvqzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f8734ea2c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.196 [INFO][5153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.196 
[INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" iface="eth0" netns="" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.196 [INFO][5153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.196 [INFO][5153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.229 [INFO][5161] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.229 [INFO][5161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.229 [INFO][5161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.238 [WARNING][5161] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.238 [INFO][5161] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.240 [INFO][5161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:16.243511 containerd[1460]: 2025-11-01 00:22:16.242 [INFO][5153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.244182 containerd[1460]: time="2025-11-01T00:22:16.243559586Z" level=info msg="TearDown network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" successfully" Nov 1 00:22:16.244182 containerd[1460]: time="2025-11-01T00:22:16.243625859Z" level=info msg="StopPodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" returns successfully" Nov 1 00:22:16.245350 containerd[1460]: time="2025-11-01T00:22:16.244863572Z" level=info msg="RemovePodSandbox for \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\"" Nov 1 00:22:16.245350 containerd[1460]: time="2025-11-01T00:22:16.244910879Z" level=info msg="Forcibly stopping sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\"" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.294 [WARNING][5175] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't 
delete WEP. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a13cec52-774e-41dd-8b73-7a0c3559c1e0", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"e5d7144e272301af42dffcaa1769de639a817e2db3351a12391b9f41edc0564d", Pod:"csi-node-driver-cvqzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f8734ea2c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.294 [INFO][5175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 
00:22:16.295 [INFO][5175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" iface="eth0" netns="" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.295 [INFO][5175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.295 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.323 [INFO][5182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.323 [INFO][5182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.323 [INFO][5182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.332 [WARNING][5182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.332 [INFO][5182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" HandleID="k8s-pod-network.001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-csi--node--driver--cvqzr-eth0" Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.334 [INFO][5182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:16.338639 containerd[1460]: 2025-11-01 00:22:16.336 [INFO][5175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729" Nov 1 00:22:16.339512 containerd[1460]: time="2025-11-01T00:22:16.338734247Z" level=info msg="TearDown network for sandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" successfully" Nov 1 00:22:16.350365 containerd[1460]: time="2025-11-01T00:22:16.349576310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:16.350365 containerd[1460]: time="2025-11-01T00:22:16.349761008Z" level=info msg="RemovePodSandbox \"001bbde9d40c3f38376e2811e52bbd1d183ce55b4ff07dee1dea22be0048c729\" returns successfully" Nov 1 00:22:16.351857 containerd[1460]: time="2025-11-01T00:22:16.351424800Z" level=info msg="StopPodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\"" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.405 [WARNING][5196] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"160577c0-dc7a-4380-a5c7-096e0298b76b", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218", Pod:"calico-apiserver-55b4c78ffc-92jbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali013bdbd72e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.406 [INFO][5196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.406 [INFO][5196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" iface="eth0" netns="" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.406 [INFO][5196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.406 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.446 [INFO][5205] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.447 [INFO][5205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.447 [INFO][5205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.455 [WARNING][5205] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.455 [INFO][5205] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.457 [INFO][5205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:16.463101 containerd[1460]: 2025-11-01 00:22:16.458 [INFO][5196] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.463101 containerd[1460]: time="2025-11-01T00:22:16.460366043Z" level=info msg="TearDown network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" successfully" Nov 1 00:22:16.463101 containerd[1460]: time="2025-11-01T00:22:16.460400579Z" level=info msg="StopPodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" returns successfully" Nov 1 00:22:16.463101 containerd[1460]: time="2025-11-01T00:22:16.461042548Z" level=info msg="RemovePodSandbox for \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\"" Nov 1 00:22:16.463101 containerd[1460]: time="2025-11-01T00:22:16.461085776Z" level=info msg="Forcibly stopping sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\"" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.515 [WARNING][5219] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0", GenerateName:"calico-apiserver-55b4c78ffc-", Namespace:"calico-apiserver", SelfLink:"", UID:"160577c0-dc7a-4380-a5c7-096e0298b76b", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b4c78ffc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251031-2100-b76d94f53a5d8f5471e9", ContainerID:"e5158a14ea07b449ffc7cb62c34161defda411f03eccd97bdf121171208f1218", Pod:"calico-apiserver-55b4c78ffc-92jbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali013bdbd72e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.515 [INFO][5219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 
00:22:16.515 [INFO][5219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" iface="eth0" netns="" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.515 [INFO][5219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.515 [INFO][5219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.543 [INFO][5226] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.543 [INFO][5226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.543 [INFO][5226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.553 [WARNING][5226] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.553 [INFO][5226] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" HandleID="k8s-pod-network.d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Workload="ci--4081--3--6--nightly--20251031--2100--b76d94f53a5d8f5471e9-k8s-calico--apiserver--55b4c78ffc--92jbd-eth0" Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.554 [INFO][5226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:16.557669 containerd[1460]: 2025-11-01 00:22:16.556 [INFO][5219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93" Nov 1 00:22:16.559708 containerd[1460]: time="2025-11-01T00:22:16.557730051Z" level=info msg="TearDown network for sandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" successfully" Nov 1 00:22:16.563251 containerd[1460]: time="2025-11-01T00:22:16.563207024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:22:16.563356 containerd[1460]: time="2025-11-01T00:22:16.563286833Z" level=info msg="RemovePodSandbox \"d5874ac3d29604af6becad595253d1e586b464328022377a522b967c2f30ea93\" returns successfully" Nov 1 00:22:17.962853 containerd[1460]: time="2025-11-01T00:22:17.962518332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:18.169419 containerd[1460]: time="2025-11-01T00:22:18.169346549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:18.171682 containerd[1460]: time="2025-11-01T00:22:18.171507121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:18.171827 containerd[1460]: time="2025-11-01T00:22:18.171570498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:18.172200 kubelet[2603]: E1101 00:22:18.172067 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:18.172200 kubelet[2603]: E1101 00:22:18.172130 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:18.172838 kubelet[2603]: E1101 00:22:18.172227 2603 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-92jbd_calico-apiserver(160577c0-dc7a-4380-a5c7-096e0298b76b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:18.172838 kubelet[2603]: E1101 00:22:18.172280 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:19.963374 containerd[1460]: time="2025-11-01T00:22:19.962943547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:20.165896 containerd[1460]: time="2025-11-01T00:22:20.165815443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:20.167696 containerd[1460]: time="2025-11-01T00:22:20.167635442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:20.167967 containerd[1460]: time="2025-11-01T00:22:20.167673948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:22:20.168118 kubelet[2603]: E1101 00:22:20.167946 2603 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:20.168118 kubelet[2603]: E1101 00:22:20.168004 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:20.168735 kubelet[2603]: E1101 00:22:20.168369 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:20.170552 containerd[1460]: time="2025-11-01T00:22:20.170514269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:20.356718 containerd[1460]: time="2025-11-01T00:22:20.355562282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:20.360203 containerd[1460]: time="2025-11-01T00:22:20.360074533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:20.360335 containerd[1460]: time="2025-11-01T00:22:20.360259445Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:22:20.360529 kubelet[2603]: E1101 00:22:20.360479 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:20.360648 kubelet[2603]: E1101 00:22:20.360546 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:20.360741 kubelet[2603]: E1101 00:22:20.360673 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:20.360861 kubelet[2603]: E1101 00:22:20.360737 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:22:21.962442 containerd[1460]: time="2025-11-01T00:22:21.962111998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:22.181926 containerd[1460]: time="2025-11-01T00:22:22.181846919Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:22.183810 containerd[1460]: time="2025-11-01T00:22:22.183744663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:22.184164 containerd[1460]: time="2025-11-01T00:22:22.183881668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:22.184277 kubelet[2603]: E1101 00:22:22.184062 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:22.184277 kubelet[2603]: E1101 00:22:22.184125 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:22.184277 kubelet[2603]: E1101 00:22:22.184235 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gflnh_calico-system(4b15d46f-f330-471a-8fc4-3dc35af1a685): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:22.184927 kubelet[2603]: E1101 00:22:22.184286 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:22:22.963318 containerd[1460]: time="2025-11-01T00:22:22.963099791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:23.164569 containerd[1460]: time="2025-11-01T00:22:23.164487399Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:23.166281 containerd[1460]: time="2025-11-01T00:22:23.166182904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:23.166522 containerd[1460]: time="2025-11-01T00:22:23.166243697Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:23.166582 kubelet[2603]: E1101 00:22:23.166525 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:23.166697 kubelet[2603]: E1101 00:22:23.166615 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:23.166767 kubelet[2603]: E1101 00:22:23.166722 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-tjjrz_calico-apiserver(af0d3006-44cf-49fd-af6f-37984237612e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:23.166819 kubelet[2603]: E1101 00:22:23.166776 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" 
Nov 1 00:22:26.966937 kubelet[2603]: E1101 00:22:26.966797 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:22:29.962604 kubelet[2603]: E1101 00:22:29.962022 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:22:30.965407 kubelet[2603]: E1101 00:22:30.965340 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:31.006083 systemd[1]: Started sshd@9-10.128.0.8:22-147.75.109.163:47040.service - OpenSSH per-connection server daemon (147.75.109.163:47040). Nov 1 00:22:31.312723 sshd[5268]: Accepted publickey for core from 147.75.109.163 port 47040 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:31.314748 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:31.320615 systemd-logind[1442]: New session 10 of user core. Nov 1 00:22:31.327848 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:22:31.662328 sshd[5268]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:31.671640 systemd[1]: sshd@9-10.128.0.8:22-147.75.109.163:47040.service: Deactivated successfully. Nov 1 00:22:31.676380 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:22:31.680188 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:22:31.682663 systemd-logind[1442]: Removed session 10. 
Nov 1 00:22:32.966293 kubelet[2603]: E1101 00:22:32.966066 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:22:35.965305 kubelet[2603]: E1101 00:22:35.965213 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:22:36.720005 systemd[1]: Started sshd@10-10.128.0.8:22-147.75.109.163:47054.service - OpenSSH per-connection server daemon (147.75.109.163:47054). 
Nov 1 00:22:36.968822 kubelet[2603]: E1101 00:22:36.968756 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:22:37.007526 sshd[5290]: Accepted publickey for core from 147.75.109.163 port 47054 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:37.009806 sshd[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:37.016949 systemd-logind[1442]: New session 11 of user core. Nov 1 00:22:37.024890 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:22:37.333219 sshd[5290]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:37.339492 systemd[1]: sshd@10-10.128.0.8:22-147.75.109.163:47054.service: Deactivated successfully. Nov 1 00:22:37.344386 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:22:37.347305 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:22:37.350667 systemd-logind[1442]: Removed session 11. 
Nov 1 00:22:38.964772 containerd[1460]: time="2025-11-01T00:22:38.964651955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:22:39.181140 containerd[1460]: time="2025-11-01T00:22:39.180993986Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:39.183332 containerd[1460]: time="2025-11-01T00:22:39.183069906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:22:39.183332 containerd[1460]: time="2025-11-01T00:22:39.183197574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:22:39.184962 kubelet[2603]: E1101 00:22:39.183574 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:39.184962 kubelet[2603]: E1101 00:22:39.183651 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:39.184962 kubelet[2603]: E1101 00:22:39.183756 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:39.186954 containerd[1460]: time="2025-11-01T00:22:39.186453821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:22:39.397867 containerd[1460]: time="2025-11-01T00:22:39.397794664Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:39.399673 containerd[1460]: time="2025-11-01T00:22:39.399563328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:22:39.399807 containerd[1460]: time="2025-11-01T00:22:39.399606562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:39.400108 kubelet[2603]: E1101 00:22:39.400054 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:39.400215 kubelet[2603]: E1101 00:22:39.400122 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 
00:22:39.400280 kubelet[2603]: E1101 00:22:39.400224 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:39.400347 kubelet[2603]: E1101 00:22:39.400288 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:22:41.963546 containerd[1460]: time="2025-11-01T00:22:41.963005611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:42.170621 containerd[1460]: time="2025-11-01T00:22:42.170505833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:42.172381 containerd[1460]: time="2025-11-01T00:22:42.172316271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:22:42.172739 containerd[1460]: time="2025-11-01T00:22:42.172360628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:22:42.172838 kubelet[2603]: E1101 00:22:42.172667 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:42.172838 kubelet[2603]: E1101 00:22:42.172727 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:42.173429 kubelet[2603]: E1101 00:22:42.172848 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-785c977b7d-jc8q5_calico-system(047da681-0394-40f3-ae91-7205aadc4ab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:42.173429 kubelet[2603]: E1101 00:22:42.172896 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:22:42.387868 systemd[1]: Started sshd@11-10.128.0.8:22-147.75.109.163:43472.service - OpenSSH per-connection server daemon (147.75.109.163:43472). Nov 1 00:22:42.689519 sshd[5305]: Accepted publickey for core from 147.75.109.163 port 43472 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:42.691802 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:42.698565 systemd-logind[1442]: New session 12 of user core. Nov 1 00:22:42.705844 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:22:42.994478 sshd[5305]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:43.000521 systemd[1]: sshd@11-10.128.0.8:22-147.75.109.163:43472.service: Deactivated successfully. Nov 1 00:22:43.004469 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:22:43.005814 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:22:43.008354 systemd-logind[1442]: Removed session 12. 
Nov 1 00:22:43.963858 containerd[1460]: time="2025-11-01T00:22:43.963786130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:44.184449 containerd[1460]: time="2025-11-01T00:22:44.184364087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:44.186608 containerd[1460]: time="2025-11-01T00:22:44.186447867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:44.186608 containerd[1460]: time="2025-11-01T00:22:44.186511118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:44.186809 kubelet[2603]: E1101 00:22:44.186756 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:44.187356 kubelet[2603]: E1101 00:22:44.186816 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:44.187356 kubelet[2603]: E1101 00:22:44.186919 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-92jbd_calico-apiserver(160577c0-dc7a-4380-a5c7-096e0298b76b): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:44.187356 kubelet[2603]: E1101 00:22:44.186986 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:45.962811 containerd[1460]: time="2025-11-01T00:22:45.962570502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:46.163239 containerd[1460]: time="2025-11-01T00:22:46.163174346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:46.165257 containerd[1460]: time="2025-11-01T00:22:46.165194321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:46.165443 containerd[1460]: time="2025-11-01T00:22:46.165307038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:22:46.165626 kubelet[2603]: E1101 00:22:46.165533 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:46.166121 kubelet[2603]: E1101 00:22:46.165625 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:46.166121 kubelet[2603]: E1101 00:22:46.165732 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:46.168946 containerd[1460]: time="2025-11-01T00:22:46.168493131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:46.378323 containerd[1460]: time="2025-11-01T00:22:46.378155253Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:46.380791 containerd[1460]: time="2025-11-01T00:22:46.380655880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:46.380791 containerd[1460]: time="2025-11-01T00:22:46.380711269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:22:46.381096 kubelet[2603]: E1101 00:22:46.380972 2603 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:46.381096 kubelet[2603]: E1101 00:22:46.381034 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:46.381320 kubelet[2603]: E1101 00:22:46.381142 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:46.381320 kubelet[2603]: E1101 00:22:46.381210 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:22:48.050989 systemd[1]: Started sshd@12-10.128.0.8:22-147.75.109.163:43482.service - OpenSSH per-connection server daemon (147.75.109.163:43482). Nov 1 00:22:48.346241 sshd[5327]: Accepted publickey for core from 147.75.109.163 port 43482 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:48.348219 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:48.354354 systemd-logind[1442]: New session 13 of user core. Nov 1 00:22:48.357826 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:22:48.645943 sshd[5327]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:48.650812 systemd[1]: sshd@12-10.128.0.8:22-147.75.109.163:43482.service: Deactivated successfully. Nov 1 00:22:48.653959 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:22:48.656851 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:22:48.659033 systemd-logind[1442]: Removed session 13. Nov 1 00:22:48.703004 systemd[1]: Started sshd@13-10.128.0.8:22-147.75.109.163:43490.service - OpenSSH per-connection server daemon (147.75.109.163:43490). Nov 1 00:22:48.986553 sshd[5341]: Accepted publickey for core from 147.75.109.163 port 43490 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:48.988696 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:48.995773 systemd-logind[1442]: New session 14 of user core. Nov 1 00:22:48.998826 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:22:49.328079 sshd[5341]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:49.334920 systemd-logind[1442]: Session 14 logged out. 
Waiting for processes to exit. Nov 1 00:22:49.335467 systemd[1]: sshd@13-10.128.0.8:22-147.75.109.163:43490.service: Deactivated successfully. Nov 1 00:22:49.339386 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:22:49.340831 systemd-logind[1442]: Removed session 14. Nov 1 00:22:49.383095 systemd[1]: Started sshd@14-10.128.0.8:22-147.75.109.163:43506.service - OpenSSH per-connection server daemon (147.75.109.163:43506). Nov 1 00:22:49.668721 sshd[5352]: Accepted publickey for core from 147.75.109.163 port 43506 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:49.670807 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:49.676670 systemd-logind[1442]: New session 15 of user core. Nov 1 00:22:49.684873 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:22:49.964582 containerd[1460]: time="2025-11-01T00:22:49.963549526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:49.971773 sshd[5352]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:49.981224 systemd[1]: sshd@14-10.128.0.8:22-147.75.109.163:43506.service: Deactivated successfully. Nov 1 00:22:49.986922 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:22:49.988292 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:22:49.989975 systemd-logind[1442]: Removed session 15. 
Nov 1 00:22:50.168126 containerd[1460]: time="2025-11-01T00:22:50.168056642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:50.170061 containerd[1460]: time="2025-11-01T00:22:50.169968592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:50.170242 containerd[1460]: time="2025-11-01T00:22:50.169976095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:50.170587 kubelet[2603]: E1101 00:22:50.170478 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:50.170587 kubelet[2603]: E1101 00:22:50.170539 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:50.171147 kubelet[2603]: E1101 00:22:50.170684 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-tjjrz_calico-apiserver(af0d3006-44cf-49fd-af6f-37984237612e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:50.171147 kubelet[2603]: E1101 00:22:50.170743 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:22:51.963101 containerd[1460]: time="2025-11-01T00:22:51.962419991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:52.161631 containerd[1460]: time="2025-11-01T00:22:52.161545324Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:52.163762 containerd[1460]: time="2025-11-01T00:22:52.163684471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:52.163939 containerd[1460]: time="2025-11-01T00:22:52.163805609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:52.164234 kubelet[2603]: E1101 00:22:52.164163 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:52.164783 
kubelet[2603]: E1101 00:22:52.164231 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:52.164783 kubelet[2603]: E1101 00:22:52.164339 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-gflnh_calico-system(4b15d46f-f330-471a-8fc4-3dc35af1a685): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:52.164783 kubelet[2603]: E1101 00:22:52.164403 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:22:52.967477 kubelet[2603]: E1101 00:22:52.967384 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:22:54.963349 kubelet[2603]: E1101 00:22:54.962639 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:22:55.025976 systemd[1]: Started sshd@15-10.128.0.8:22-147.75.109.163:35902.service - OpenSSH per-connection server daemon (147.75.109.163:35902). Nov 1 00:22:55.319976 sshd[5372]: Accepted publickey for core from 147.75.109.163 port 35902 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:22:55.322158 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:55.329086 systemd-logind[1442]: New session 16 of user core. Nov 1 00:22:55.337854 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:22:55.614093 sshd[5372]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:55.620304 systemd[1]: sshd@15-10.128.0.8:22-147.75.109.163:35902.service: Deactivated successfully. Nov 1 00:22:55.623420 systemd[1]: session-16.scope: Deactivated successfully. 
Nov 1 00:22:55.624748 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:22:55.626426 systemd-logind[1442]: Removed session 16. Nov 1 00:22:55.963469 kubelet[2603]: E1101 00:22:55.963395 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:22:59.963615 kubelet[2603]: E1101 00:22:59.963521 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:23:00.683479 systemd[1]: Started sshd@16-10.128.0.8:22-147.75.109.163:43656.service 
- OpenSSH per-connection server daemon (147.75.109.163:43656). Nov 1 00:23:01.009270 sshd[5407]: Accepted publickey for core from 147.75.109.163 port 43656 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:01.011531 sshd[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:01.024379 systemd-logind[1442]: New session 17 of user core. Nov 1 00:23:01.029877 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:23:01.344122 sshd[5407]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:01.355556 systemd[1]: sshd@16-10.128.0.8:22-147.75.109.163:43656.service: Deactivated successfully. Nov 1 00:23:01.360126 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:23:01.362247 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:23:01.364111 systemd-logind[1442]: Removed session 17. Nov 1 00:23:04.967943 kubelet[2603]: E1101 00:23:04.966735 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:23:04.967943 kubelet[2603]: E1101 00:23:04.967345 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:23:06.402986 systemd[1]: Started sshd@17-10.128.0.8:22-147.75.109.163:43658.service - OpenSSH per-connection server daemon (147.75.109.163:43658). Nov 1 00:23:06.712902 sshd[5423]: Accepted publickey for core from 147.75.109.163 port 43658 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:06.713793 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:06.725897 systemd-logind[1442]: New session 18 of user core. Nov 1 00:23:06.732258 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:23:07.110943 sshd[5423]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:07.120405 systemd[1]: sshd@17-10.128.0.8:22-147.75.109.163:43658.service: Deactivated successfully. Nov 1 00:23:07.126819 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:23:07.128438 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:23:07.130549 systemd-logind[1442]: Removed session 18. 
Nov 1 00:23:07.966093 kubelet[2603]: E1101 00:23:07.965795 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:23:07.971711 kubelet[2603]: E1101 00:23:07.970307 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:23:08.967995 kubelet[2603]: E1101 00:23:08.967939 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:23:12.175207 systemd[1]: Started sshd@18-10.128.0.8:22-147.75.109.163:40876.service - OpenSSH per-connection server daemon (147.75.109.163:40876). Nov 1 00:23:12.493959 sshd[5437]: Accepted publickey for core from 147.75.109.163 port 40876 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:12.493439 sshd[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:12.503332 systemd-logind[1442]: New session 19 of user core. Nov 1 00:23:12.511884 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:23:12.893939 sshd[5437]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:12.900109 systemd[1]: sshd@18-10.128.0.8:22-147.75.109.163:40876.service: Deactivated successfully. Nov 1 00:23:12.905276 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:23:12.911801 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:23:12.914847 systemd-logind[1442]: Removed session 19. Nov 1 00:23:12.951083 systemd[1]: Started sshd@19-10.128.0.8:22-147.75.109.163:40878.service - OpenSSH per-connection server daemon (147.75.109.163:40878). Nov 1 00:23:13.258792 sshd[5450]: Accepted publickey for core from 147.75.109.163 port 40878 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:13.260414 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:13.268903 systemd-logind[1442]: New session 20 of user core. Nov 1 00:23:13.275849 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 1 00:23:13.694974 sshd[5450]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:13.702918 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:23:13.704680 systemd[1]: sshd@19-10.128.0.8:22-147.75.109.163:40878.service: Deactivated successfully. Nov 1 00:23:13.710853 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:23:13.717698 systemd-logind[1442]: Removed session 20. Nov 1 00:23:13.750776 systemd[1]: Started sshd@20-10.128.0.8:22-147.75.109.163:40884.service - OpenSSH per-connection server daemon (147.75.109.163:40884). Nov 1 00:23:13.967017 kubelet[2603]: E1101 00:23:13.966805 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:23:14.064627 sshd[5461]: Accepted publickey for core from 147.75.109.163 port 40884 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:14.067012 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:14.075758 systemd-logind[1442]: New session 21 of user core. 
Nov 1 00:23:14.084043 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:23:15.360880 sshd[5461]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:15.372137 systemd[1]: sshd@20-10.128.0.8:22-147.75.109.163:40884.service: Deactivated successfully. Nov 1 00:23:15.378204 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:23:15.381412 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:23:15.384388 systemd-logind[1442]: Removed session 21. Nov 1 00:23:15.423557 systemd[1]: Started sshd@21-10.128.0.8:22-147.75.109.163:40886.service - OpenSSH per-connection server daemon (147.75.109.163:40886). Nov 1 00:23:15.735650 sshd[5479]: Accepted publickey for core from 147.75.109.163 port 40886 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:15.737612 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:15.748153 systemd-logind[1442]: New session 22 of user core. Nov 1 00:23:15.757830 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:23:15.965333 kubelet[2603]: E1101 00:23:15.965080 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:23:16.321051 sshd[5479]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:16.328747 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. 
Nov 1 00:23:16.329501 systemd[1]: sshd@21-10.128.0.8:22-147.75.109.163:40886.service: Deactivated successfully. Nov 1 00:23:16.334458 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:23:16.340779 systemd-logind[1442]: Removed session 22. Nov 1 00:23:16.379005 systemd[1]: Started sshd@22-10.128.0.8:22-147.75.109.163:40896.service - OpenSSH per-connection server daemon (147.75.109.163:40896). Nov 1 00:23:16.696172 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 40896 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:16.698649 sshd[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:16.712690 systemd-logind[1442]: New session 23 of user core. Nov 1 00:23:16.718912 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:23:17.024971 sshd[5491]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:17.036062 systemd[1]: sshd@22-10.128.0.8:22-147.75.109.163:40896.service: Deactivated successfully. Nov 1 00:23:17.042942 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:23:17.045782 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:23:17.048184 systemd-logind[1442]: Removed session 23. 
Nov 1 00:23:18.967886 kubelet[2603]: E1101 00:23:18.967810 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:23:19.964515 containerd[1460]: time="2025-11-01T00:23:19.963798449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:20.176038 containerd[1460]: time="2025-11-01T00:23:20.175775319Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.178180 containerd[1460]: time="2025-11-01T00:23:20.178032057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:20.178180 containerd[1460]: time="2025-11-01T00:23:20.178107506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:20.179265 kubelet[2603]: E1101 00:23:20.178798 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:20.179265 kubelet[2603]: E1101 00:23:20.178938 2603 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:20.179265 kubelet[2603]: E1101 00:23:20.179196 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.183142 containerd[1460]: time="2025-11-01T00:23:20.182784084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:20.394255 containerd[1460]: time="2025-11-01T00:23:20.394017020Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.396061 containerd[1460]: time="2025-11-01T00:23:20.395829607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:20.396205 containerd[1460]: time="2025-11-01T00:23:20.396004246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:20.397162 kubelet[2603]: E1101 00:23:20.396652 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:20.397162 kubelet[2603]: E1101 00:23:20.396717 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:20.397162 kubelet[2603]: E1101 00:23:20.396847 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b64bcb7c8-98hnq_calico-system(d224b46c-00ee-4398-aed6-0fb0a4fe6275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.397449 kubelet[2603]: E1101 00:23:20.396911 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:23:21.964920 kubelet[2603]: E1101 00:23:21.964405 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b" Nov 1 00:23:22.081087 systemd[1]: Started sshd@23-10.128.0.8:22-147.75.109.163:54460.service - OpenSSH per-connection server daemon (147.75.109.163:54460). Nov 1 00:23:22.383452 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 54460 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:22.386364 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:22.397489 systemd-logind[1442]: New session 24 of user core. Nov 1 00:23:22.400977 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:23:22.747996 sshd[5507]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:22.757660 systemd[1]: sshd@23-10.128.0.8:22-147.75.109.163:54460.service: Deactivated successfully. Nov 1 00:23:22.762622 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:23:22.764389 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:23:22.766865 systemd-logind[1442]: Removed session 24. 
Nov 1 00:23:22.963373 containerd[1460]: time="2025-11-01T00:23:22.963209812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:23.188104 containerd[1460]: time="2025-11-01T00:23:23.187835314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:23.189916 containerd[1460]: time="2025-11-01T00:23:23.189713980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:23.189916 containerd[1460]: time="2025-11-01T00:23:23.189846899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:23.191399 kubelet[2603]: E1101 00:23:23.190281 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:23.191399 kubelet[2603]: E1101 00:23:23.190339 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:23.191399 kubelet[2603]: E1101 00:23:23.190447 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-785c977b7d-jc8q5_calico-system(047da681-0394-40f3-ae91-7205aadc4ab4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:23.191399 kubelet[2603]: E1101 00:23:23.190493 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:23:27.808097 systemd[1]: Started sshd@24-10.128.0.8:22-147.75.109.163:54462.service - OpenSSH per-connection server daemon (147.75.109.163:54462). Nov 1 00:23:28.120342 sshd[5529]: Accepted publickey for core from 147.75.109.163 port 54462 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:28.120295 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:28.128054 systemd-logind[1442]: New session 25 of user core. Nov 1 00:23:28.137877 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 00:23:28.449953 sshd[5529]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:28.458225 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:23:28.459254 systemd[1]: sshd@24-10.128.0.8:22-147.75.109.163:54462.service: Deactivated successfully. Nov 1 00:23:28.463045 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:23:28.466125 systemd-logind[1442]: Removed session 25. 
Nov 1 00:23:28.966995 containerd[1460]: time="2025-11-01T00:23:28.966948310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:29.176491 containerd[1460]: time="2025-11-01T00:23:29.176230966Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:29.178214 containerd[1460]: time="2025-11-01T00:23:29.178027932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:29.178214 containerd[1460]: time="2025-11-01T00:23:29.178033727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:29.180620 kubelet[2603]: E1101 00:23:29.178570 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:29.180620 kubelet[2603]: E1101 00:23:29.178651 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:29.180620 kubelet[2603]: E1101 00:23:29.178751 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:29.184216 containerd[1460]: time="2025-11-01T00:23:29.183938878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:29.390033 containerd[1460]: time="2025-11-01T00:23:29.389971097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:29.391966 containerd[1460]: time="2025-11-01T00:23:29.391894626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:29.392130 containerd[1460]: time="2025-11-01T00:23:29.392021290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:29.392327 kubelet[2603]: E1101 00:23:29.392270 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:29.392440 kubelet[2603]: E1101 00:23:29.392345 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:29.392513 kubelet[2603]: E1101 00:23:29.392452 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cvqzr_calico-system(a13cec52-774e-41dd-8b73-7a0c3559c1e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:29.392680 kubelet[2603]: E1101 00:23:29.392521 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cvqzr" podUID="a13cec52-774e-41dd-8b73-7a0c3559c1e0" Nov 1 00:23:29.962905 kubelet[2603]: E1101 00:23:29.962778 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-gflnh" podUID="4b15d46f-f330-471a-8fc4-3dc35af1a685" Nov 1 00:23:29.962905 kubelet[2603]: E1101 00:23:29.962854 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-tjjrz" podUID="af0d3006-44cf-49fd-af6f-37984237612e" Nov 1 00:23:31.964614 kubelet[2603]: E1101 00:23:31.964528 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b64bcb7c8-98hnq" podUID="d224b46c-00ee-4398-aed6-0fb0a4fe6275" Nov 1 00:23:33.511578 systemd[1]: Started sshd@25-10.128.0.8:22-147.75.109.163:52576.service - OpenSSH per-connection server daemon (147.75.109.163:52576). 
Nov 1 00:23:33.825197 sshd[5564]: Accepted publickey for core from 147.75.109.163 port 52576 ssh2: RSA SHA256:lhvbxSuRd7ZdYPYXFffu3GmZzEM52Ht9qmTuaZaa8aE Nov 1 00:23:33.826838 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:33.836662 systemd-logind[1442]: New session 26 of user core. Nov 1 00:23:33.840835 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 00:23:34.201459 sshd[5564]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:34.209088 systemd[1]: sshd@25-10.128.0.8:22-147.75.109.163:52576.service: Deactivated successfully. Nov 1 00:23:34.216171 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:23:34.217884 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:23:34.221163 systemd-logind[1442]: Removed session 26. Nov 1 00:23:35.964671 kubelet[2603]: E1101 00:23:35.964607 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-785c977b7d-jc8q5" podUID="047da681-0394-40f3-ae91-7205aadc4ab4" Nov 1 00:23:35.965392 containerd[1460]: time="2025-11-01T00:23:35.964897830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:36.168329 containerd[1460]: time="2025-11-01T00:23:36.168266526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:36.170026 containerd[1460]: time="2025-11-01T00:23:36.169913448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:36.170026 containerd[1460]: time="2025-11-01T00:23:36.169970842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:36.170980 kubelet[2603]: E1101 00:23:36.170881 2603 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:36.171117 kubelet[2603]: E1101 00:23:36.171007 2603 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:36.171869 kubelet[2603]: E1101 00:23:36.171817 2603 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-55b4c78ffc-92jbd_calico-apiserver(160577c0-dc7a-4380-a5c7-096e0298b76b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.172029 kubelet[2603]: E1101 00:23:36.171907 2603 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-55b4c78ffc-92jbd" podUID="160577c0-dc7a-4380-a5c7-096e0298b76b"